May 30, 2017
 

I’m going to try to make a nice easy introduction to my two favorite functions in Houdini VEX (besides fit01 and chramp of course): xyzdist and primuv. These functions are at the core of a lot of really useful and cool tricks in Houdini, including rivets, the attributeInterpolate SOP, the old “droplets falling down a soda can” effect, and some really awesome stuff with volume shaders. I’ll do a little example of each as a way of showing off what you can do with these clever little tools.

First, let’s take a look at the VEX definition (the third overload here is the most frequently used):
float xyzdist(string geometry, vector pt, int &prim, vector &uv, float maxdist)

At its most basic, xyzdist will return the distance from the sample point pt to the nearest point on the surface geometry. Note that this doesn’t mean the nearest actual point, but the interpolated surface in between those points.

Those little “&” symbols mean that this function will write to those parameters, rather than just read from them. So if we feed this function an integer and a vector, in addition to the distance to the surface, it will also give us the primitive number prim and the parametric UVs on that primitive uv. Note that parametric UVs are not the same as regular UVs… this just means the normalized position relative to the individual primitive we found.
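
To make that concrete, here’s a minimal point wrangle sketch (my own example, not from the original post) that assumes a target surface is wired into the wrangle’s second input. It finds the closest location on that surface, then uses primuv to read the interpolated position there:

// find the nearest primitive and parametric UV on input 1
int prim;
vector uv;
float dist = xyzdist(1, @P, prim, uv);
// sample the interpolated position at that exact spot on the surface
vector surfpos = primuv(1, "P", prim, uv);
@P = surfpos;

This is essentially how a rivet works: record prim and uv once, then keep calling primuv as the surface deforms and your point will stick to it.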

So, what can we do with this? Click below to find out…

Dec 29, 2016
 

My good friend and motion graphics bromance, Eli Guerron, asked me to help him create a procedural system of branching circuits that would look something like a schematic drawing. I stupidly thought this would be an easy trick with particles, but to get the right look it actually took quite a lot of experimenting.


The reference video he sent me showed branching structures that would occasionally turn or split apart at precise angles. This part was easy enough to figure out, but the real trick was preventing collisions between branches. I first thought that I could use a modified version of the space colonization algorithm to create the non-intersecting patterns, but even after writing a function to restrict the growth directions to increments of 45 degrees (explained below), I couldn’t get the patterns to look right. Part of the reason the reference image looks “schematic-y” is because the traces will run alongside each other in parallel, sometimes splitting apart but generally running in groups. Space colonization just doesn’t grow that way, so I had to throw that method out.

The method that worked the best (it’s still not perfect) is just growing polylines in a SOP Solver, completely manually in VEX snippets. Each point has a starting velocity vector (this is left over from when I originally tried to solve this with particles), and on each timestep the endpoints will duplicate themselves, then add their velocity to their point position, and draw a polyline between the original and the duplicate. This forms the basis of the traces. The code for this is pretty straightforward:

// growth speed; assumed here to be promoted as a channel
float speed = chf("speed");
// create new point, inheriting attrs from this point
int newpt = addpoint(0,@ptnum);
// assign new position to new point
vector newv = normalize(v@v) * speed;
vector newpos = @P + newv;
setpointattrib(0,"P",newpt,newpos,"set");
setpointattrib(0,"v",newpt,newv,"set");
// add primitive to connect new point to current point
int newprim = addprim(0,"polyline");
addvertex(0,newprim,@ptnum);
addvertex(0,newprim,newpt);
// remove old point from "growth" group...
// this can be done by just setting a point attribute

The next step is to get the traces to randomly zigzag at 45 degree angles. Since I’m dealing with vectors here, rotating vectors is most easily done (IMO) via a matrix. The math really isn’t too bad, but there are a few steps that need to happen to make this work.

First, I need to create a generic identity matrix. This is just a matrix for the point that, when multiplied against the point position (or any other vector attribute, such as velocity), returns the same vector right back. Creating an identity matrix is easy:

matrix3 m = ident();

Next, I want to generate an angle of 45 degrees. When dealing with vector math in general, you typically want radians, not degrees. 45 degrees is equivalent to π/4 radians, which I can then randomly negate to create either a positive or negative angle.

matrix3 m = ident();
float PI = 3.14159;
// seed is assumed to be a promoted channel
float seed = chf("seed");
float angle = (PI * 0.25);
if(rand(@ptnum*@Time+seed) < 0.5) {
    angle *= -1;
}

Now I need to define an axis to rotate my angle around. Since my schematic is growing on the XZ plane, the axis to bend around would be the Y axis, or the vector {0,1,0}. Once I have the axis, the angle, and a matrix, I can use VEX’s rotate() function to rotate my identity matrix. All you have to do at that point is multiply any vector by that matrix, and it will rotate to the specified angle. Keep in mind that for predictable results, your vector should be normalized before multiplying it against a rotation (3×3) matrix. You can always multiply the resulting vector against the length of the original vector, once you’re done.

matrix3 m = ident();
float PI = 3.14159;
// seed is assumed to be a promoted channel
float seed = chf("seed");
float angle = (PI * 0.25);
if(rand(@ptnum*@Time+seed) < 0.5) {
    angle *= -1;
}
vector up = {0,1,0};
rotate(m,angle,up);
newv = (normalize(newv) * m) * speed;
setpointattrib(0,"v",newpt,newv,"set");

The branching step is almost exactly the same. I just randomly generate a new point and rotate its velocity by +/- 45 degrees, but don’t remove the original point from the growth group. (In my example HIP, the “tips” growth group is defined by being red, while the static group is green. The “tips” group is redefined at the beginning of each step based on color.)
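
Here’s a rough sketch of what that branching step could look like in the same solver (the branch_chance slider and the random seeds are my own inventions, not the exact code from the HIP):

// occasionally spawn a sibling tip from this growth point
if(rand(@ptnum + @Time * 33.17) < chf("branch_chance")) {
    int branchpt = addpoint(0, @ptnum);
    // rotate the new tip's velocity +/- 45 degrees around Y
    matrix3 bm = ident();
    float bangle = 3.14159 * 0.25;
    if(rand(@ptnum * @Time + 7.77) < 0.5) {
        bangle *= -1;
    }
    rotate(bm, bangle, {0,1,0});
    vector bv = (normalize(v@v) * bm) * length(v@v);
    setpointattrib(0, "v", branchpt, bv, "set");
    // the original point keeps its color, so it stays in the growth group
}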

This setup alone gets some pretty cool patterns, but it’s messy without collision detection.

The collision detection is where things get tricky. I wanted growth points to look ahead of themselves along their velocity vector, and if they detected a collision, to try to match the collision surface’s velocity. This process repeats in a loop until the point can verify that nothing is blocking future growth. If the loop exceeds a maximum number of tries, it kills the branch by removing the point from the growth group.

The easiest way to “look ahead” and grab the velocity from collided polylines was to use the intersect() function. This returns a primitive number (or -1 if there was no collision), and parametric UV coordinates at the collision site. The primitive number and UV coordinates can then be fed into the primuv() function, which can be used to grab any attribute value at those exact coordinates. The velocity value at the collision point can then be assigned to the growth point so it moves in the same direction.

// find intersection along velocity vector.
// if an intersection is found, inherit that point's
// velocity attribute.
// repeat this up to "maxtries" to resolve collisions.
// if there is still a collision when maxtries is reached,
// remove this particle from the tips group (turn green).

vector interP;
vector forward = {1,0,0};
vector up = {0,1,0};
float PI = 3.14159;
float interU;
float interV;
int maxtries = chi("max_tries");
int try = 0;

while(try <= maxtries) {
    int primnum = intersect(1,@P,v@v*4,interP,interU,interV);
    if(primnum != -1) {
        // we collided
        // get velocity of collision point
        vector uv = set(interU,interV,0);
        vector collideV = primuv(1,"v",primnum,uv);
        v@v = collideV;
        // if this was the last try, this line has to stop
        if(try==maxtries) {
            // turn green to remove from tips on next step
            @Cd = {0,1,0};
        }
    } else {
        // no collision ahead; the path is clear, so stop checking
        break;
    }
    try+=1;
}

This works… okay, but because the points sometimes collide in between points that are at angles to each other, every once in a while the primuv() function will return interpolated velocities that are a little “fuzzy,” meaning not locked to 45-degree angles. It doesn’t happen often, but when it does happen, you start seeing some weird curvy lines. So as a last step, the velocity vectors need to be locked to 45-degree angles, just in case.

First, I define a forward and up vector. Forward is +X (1,0,0), up is +Y (0,1,0). The forward vector is to have an axis to compare the velocity against… since this schematic is being drawn on the XZ plane, the +X axis is just a convenient starting point for figuring out rotations. To rotate angles on this XZ plane, I use the Y axis (up).

Figuring out the angle between two vectors is straightforward, as long as both vectors are normalized. The formula is this:

float theta = acos( dot(V1,V2) );

This returns the angle (typically called theta) between vectors V1 and V2, in radians. Because acos() only returns values between 0 and π, it can’t distinguish rotations toward +Z from rotations toward -Z, so if the velocity has a negative Z component, we just multiply the angle by -1.

float angle = acos( dot( normalize(v@v),forward) );
if(v@v.z < 0) {
    angle *= -1;
}

Next, we need to figure out how far away our angle is from a 45-degree angle. We can do this by using the modulo operator to figure out the remainder when we divide our angle by (π/4).

float angle = acos( dot( normalize(v@v),forward) );
float rem = angle % (PI * 0.25);
angle -= rem;
if(v@v.z < 0) {
    angle *= -1;
}

Now we have a nice 45-degree-divisible angle relative to +X (or 0 radians). We need to turn this into a unit vector, which we can then multiply against our original velocity vector’s length to get the final velocity.

Check out this circle diagram to see what we’re doing here:
[image: unit circle diagram]

Substitute Y with Z and you can already see the exact formula we’re going to use. Thanks to the unit circle, we can easily plot out where on the circle we’ll be with a little more trig. Then we just have to multiply that against the original velocity’s magnitude (length).

float angle = acos( dot( normalize(v@v),forward) );
float rem = angle % (PI * 0.25);
angle -= rem;
if(v@v.z < 0) {
    angle *= -1;
}
vector newV = set(cos(angle),0,sin(angle)) * length(v@v);
v@v = newV;

The final result is pretty cool! I still don’t like how some of the branches get too close to each other, and points can still intersect each other when two tips collide with each other on the same point, but it’s pretty close to the original setup. I’d like to eventually figure out a better way to keep the spacing between traces a little more consistent (right now they tend to sometimes bunch up tighter than I’d like), but that will have to happen later on.

Here’s the HIP file, if you want to play with it!

 

Apr 01, 2016
 

As part of the pipeline I’m currently writing for Timber, I’ve been working on an input/output system that can pass data back and forth between Maya and Houdini. The goal is to have a Maya artist send an Alembic scene to Houdini, process the scene in Houdini, then send the modified scene back to Maya. The tricky part is keeping the scene recognizable when it makes the round trip, while also keeping it easy to work with in Houdini.

There are several steps to making this process work, and you’re better off scripting as much as you can because it’s annoying and repetitive. Also, the method I’m using will preserve the scene hierarchy, but it will not preserve pivots or transform information (everything becomes baked to world space). Hopefully there will be a better way to manage that soon.

First off, when you import the Alembic, use an Alembic Archive geometry container. Disable “Build Hierarchy Using Subnetworks” and then build the hierarchy. Instead of packing each child transform inside subnetworks for the whole Alembic hierarchy, you should get a flattened hierarchy inside your Alembic Archive that looks like Fig. 1.

Fig. 1. A flattened Alembic subnet. All the geometry exists inside this subnetwork, with no child subnetworks.

The reason you want this flattened hierarchy is so that you can merge everything into another network easily. I prefer to import all my data in one place, then Object Merge it into another network to modify it in order to keep everything nice and readable. Since everything is flat, on the Object Merge node you can just merge in /obj/alembicarchive1/* and everything should appear nicely. In order to preserve the original hierarchy for export, though, we’re going to want to enable “Create Per-Primitive Path” so that a primitive attribute objname appears on every packed Alembic primitive we’re reading into the scene.

This objname parameter isn’t storing exactly what we want, though. It’s storing the path to the SOP in Houdini’s terms, not the path the object had in the original export. If you look inside the Alembic Archive you originally created and click on any of the Alembic nodes inside, you can see that there’s an objectPath parameter that is storing the original Alembic path. We just need to get this value for each primitive. Since we have objname already stored, it’s not too hard to get objectPath from each packed primitive. We just need to use a little VEX (don’t worry).

Drop down an Attribute Wrangle after your Object Merge, and try the following code:

// fetch the Houdini path stored on this packed primitive
string objname = prim(geoself(),"objname",@primnum);
// build the channel path to that node's objectPath parameter
string channame = objname + "/objectPath";
// grab the original Alembic path and store it on the primitive
string objpath = chs(channame);
s@abcPath = objpath;

First we fetch the Houdini object path (stored as objname) on each primitive. Next, we generate a string for the channel name that we want to query, which is objname + "/objectPath", the channel name that contains the original Alembic path of the object. We then use chs() to grab the value of that channel, and store it as a primitive string attribute called abcPath. Not too bad. If you look at the object now in the Geometry Spreadsheet, you can see that each primitive now has a string attribute pointing to the Houdini path as well as the original Alembic path of each object.

There’s another little catch here. We’re probably going to want to convert this geometry to regular Houdini geometry, since there’s not a lot we can do with packed primitives (or polygon soups, depending on what you’re trying to do). So we’ll need to append an Unpack SOP. When you unpack, though, you’ll notice your primitive attributes are gone. To transfer the primitive attribute onto the newly unpacked polygon soups, just enter abcPath into the Transfer Attributes parameter on the Unpack SOP. Now every primitive should have the correct attribute, and you can go ahead and drop down a Convert SOP to convert everything into standard Houdini geometry.

You can go ahead and smash up the geometry to your heart’s content now, as long as you maintain those primitive attributes. When you’re done and it’s time to export an Alembic, all you have to do on the Alembic ROP is select “Build Hierarchy From Attribute” and type in abcPath as your Path attribute. If you load the exported .ABC file back into Maya, you should see that the object names and their hierarchical relationships to each other are unchanged (although your pivots will all be at the origin, and the transforms frozen). Now your Maya team won’t freak out when everything in your Houdini scene comes in as one giant unrecognizable object!

Mar 21, 2016
 

I’m trying to build a system right now that can automatically substitute environment variables for paths on a per-scene basis. When a user opens a Maya file, I want to parse that file and set temporary environment variables called $SHOW, $SEQ and $SHOT that point to the show’s root folder, the sequence’s folder in that show, or the shot in that sequence. The variable names and paths I’m trying to get are pretty specific to the current pipeline I’m working on, but this essentially lets me use relative paths in Maya but WITHOUT using Maya’s outdated and inflexible workspace system. I don’t want an individual Maya project for every single shot and asset in the entire show, and I don’t want all of my Maya scenes to be forced to live within a single workspace, either.

I’ve solved this problem before by using symbolic links to basically force all of the Maya projects generated for every single shot and asset to link to the same place. This makes for a pretty busy-looking file system, though, and symlinks only work on Unix-based servers (i.e. not Windows). This system I’m building now looks a lot more like Houdini’s global variables… as long as I define $SHOW or $SHOT or whatever, I can path a reference or a texture to $SHOT/assets/whatever.xyz and it doesn’t necessarily have to be within a Maya workspace, and it’s also not an absolute path.
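
As a rough sketch of the idea (the folder layout and parsing here are hypothetical; the real pipeline is more involved), a scene-open callback could do something like this:

import os

def set_shot_vars(scene_path):
    # assumes a layout like /jobs/<show>/<seq>/<shot>/scenes/file.ma
    parts = scene_path.replace('\\', '/').split('/')
    root = '/'.join(parts[:2])  # e.g. "/jobs"
    show, seq, shot = parts[2:5]
    os.environ['SHOW'] = '/'.join([root, show])
    os.environ['SEQ'] = '/'.join([root, show, seq])
    os.environ['SHOT'] = '/'.join([root, show, seq, shot])

set_shot_vars('/jobs/MYSHOW/SEQ010/SHOT0010/scenes/anim_v001.ma')

Once those are set, Maya will expand $SHOT in any file path at resolution time, so references and textures can be pathed relative to the shot without a workspace.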

Read more below…

Mar 11, 2016
 

A while ago I posted about how to manage viewports in Maya in order to prevent the screen from redrawing while doing a bake or Alembic export, which slows things down considerably. It turns out there’s a way easier way to handle this, thanks to a hidden (of course) MEL variable that Maya docs don’t seem to mention anywhere.

The command is:

paneLayout -e -manage false $gMainPane

The variable $gMainPane is a shortcut that just refers to Maya’s main viewport window. You can set the “managed” state of the entire pane to False, which simply disables refreshing it until you set it back to True. Just one line of code!
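
In practice you wrap your heavy operation in a pair of these calls. Here’s a hedged Python sketch (the bake itself is just a stand-in for whatever export you’re running):

import maya.cmds as cmds
import maya.mel as mel

# hide the main pane, do the heavy work, then restore it.
# the try/finally guarantees the UI comes back even if the bake errors.
mel.eval('global string $gMainPane; paneLayout -e -manage false $gMainPane;')
try:
    cmds.bakeResults('myCtrl', t=(1, 100), simulation=True)
finally:
    mel.eval('global string $gMainPane; paneLayout -e -manage true $gMainPane;')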

Here’s another, even easier method, that’s actually part of the Python cmds module:

cmds.refresh(suspend=True)

There’s a catch, though. From the docs on the refresh() command:

Suspends or resumes Maya’s handling of refresh events. Specify “on” to suspend refreshing, and “off” to resume refreshing. Note that resuming refresh does not itself cause a refresh — the next natural refresh event in Maya after “refresh -suspend off” is issued will cause the refresh to occur. Use this flag with caution: although it provides opportunities to enhance performance, much of Maya’s dependency graph evaluation in interactive mode is refresh driven, thus use of this flag may lead to slight solve differences when you have a complex dependency graph with interrelations.

So make sure you test each method with different kinds of bakes before you commit to any one solution.
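
If you do go the refresh route, the pattern is the same shape (the Alembic export here is a made-up example, and assumes the AbcExport plugin is loaded):

import maya.cmds as cmds

cmds.refresh(suspend=True)
try:
    cmds.AbcExport(j='-frameRange 1 100 -root |myMesh -file /tmp/out.abc')
finally:
    cmds.refresh(suspend=False)
    cmds.refresh(force=True)  # kick one redraw so the viewport catches up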

Feb 25, 2016
 

Last year I worked on a Vikings teaser for King and Country over in Santa Monica. The spot was mostly live-action, but there were a bunch of CG snakes that needed to be added in post, and so I finally got to flex my rigging muscles a little bit.

First, here’s the spot:

Snakes seem like simple problems to solve, since they don’t have shoulders or hips or any other nasty bits that are hard to rig on humans, but the problem lies in the lack of control an animator has over a typical IK spline.

Most simple snake rigs are just that… make an IK spline, cluster the curve, let the animator sort out the rest. Maybe the animator will get lucky and there will be some kind of control hierarchy, but otherwise they’re in for a lot of counter-animating hell. IK splines also suffer from a lack of precision twisting… snakes (especially when you have big piles of them) tend to need to have different twisting rotations along the length of the body, and IK splines can only twist linearly from start to end. Stretching them also typically results in unstable behavior, with the end joint stretching well beyond the intended values, especially when the spline curve is bent quite a bit.

Click below for more details…

Oct 15, 2015
 

I realize this effect was done years and years ago by smarter people than me (JT Nimoy, Adam Swaab, among others) but I’m a big nerd for procedural art and wanted to see how I could create the Quorra Heart effect described on Nimoy’s blog here. The effect itself isn’t terribly complicated to do in Houdini, but it took me a while to figure out the best approach (thanks Ray).

Originally I wrote a function in VEX to sample a volume in cross-sections. The volume was just a sphere multiplied against a 3D Perlin noise function. I stepped through the volume grid in a loop, checking to see if the volume’s value was close to the isosurface value I wanted, and then added points where it was. Here’s the code:


float minX = -.5;
float maxX = .5;
float minY = -.5;
float maxY = .5;
float minZ = -.5;
float maxZ = .5;
float interval = 0.004;
float isovalue = 0.001;

int zslices = 15;

// calculate z interval
float zint = abs(minZ - maxZ) / zslices;

for(float x=minX; x<maxX; x+=interval) {
    for(float y=minY; y<maxY; y+=interval) {
        for(float z=minZ; z<maxZ; z+= zint) {
            vector pos;
            pos.x = x;
            pos.y = y;
            pos.z = z;
            //printf("%f",pos);
            float s = volumesample(1,"density",pos);
            if(abs(s) < isovalue) {
                int newpt = addpoint(geoself(), pos);
                setpointattrib(geoself(), "sdfvalue", newpt, s);
            }
        }
    }
}

Nothing too fancy there. Since the volume here is an SDF, I’m establishing a boundary condition around the isosurface value (the value of an SDF volume at the surface is zero) and checking to see if the current voxel is really really close to zero. If it is, I draw a point. That’s it.

This method outlined the shape in an interesting way, but it wasn’t even close to the original effect, and the points weren’t connected to each other which was problematic. I was looking at the problem the wrong way; I didn’t want to sample the volume by planes, I wanted to sample the volume by spheres!

I decided to scrap the VEX approach and do this visually. Houdini’s Cookie SOP can extract curves from the areas where two objects intersect. I just had to create a bunch of densely-packed concentric spheres, extract the intersection curves using the Cookie SOP with the volume converted to a polygon mesh, and then it was done! Very simple effect when it comes down to it.

For the final animation below, I had both concentric spheres and a stacked group of planes intersecting with the volume. The animation is just the Flow parameter of the Flow Noise VOP keyframed from zero to one… this means the animation can loop seamlessly. There’s also a little bit of VOPs work to add color to the intersection points; it’s just a distance lookup between the point’s position and the center of the sphere, fit to a 0-1 range and used to drive a ramp. You could optionally drive the point color based on the point’s distance to the SDF surface by using a Volume Sample From File VOP in a similar manner (the closer your volume sample is to zero, the closer you are to the surface).
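
If you’d rather do that color lookup in a wrangle instead of VOPs, a rough equivalent might look like this (the maxdist channel and ramp name are placeholders):

// distance from the sphere's center, normalized to a 0-1 bias
float d = distance(@P, {0,0,0});
float bias = fit(d, 0, chf("maxdist"), 0, 1);
// use the bias to look up a color ramp
v@Cd = chramp("color_ramp", bias);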

Here’s the animation:

isosurface intersections test 2 from Toadstorm Inc on Vimeo.

I’d like to eventually try to get this working on the GPU in TouchDesigner, but I’m still a total beginner to GLSL so it’ll be a while before I have the chops. If anyone has any tips there I’d love to hear them.

Jun 24, 2015
 

A quick script here to convert Gizmos in Nuke into Groups. Gizmos are nice in theory in that they provide reusable groups of nodes for users to interact with, but they’re often more trouble than they’re worth, especially if you need to send your Nuke scripts to someone else. Here’s a script that just converts all the Gizmos in your scene into groups seamlessly:

import nuke

def isGizmo(node):
    return 'gizmo_file' in node.knobs()
    
def getGizmos():
    allNodes = nuke.allNodes()
    if allNodes:
        gizmos = [f for f in allNodes if isGizmo(f)]
        if gizmos:
            return gizmos
    return False

def deselectAll():
    # deselect all nodes.
    for i in nuke.allNodes():
        i.knob('selected').setValue(False)
        
def convertGizmoToGroup(gizmo):
    # convert a gizmo to an identical group.
    # delete the original, reconnect the group in the chain.
    # rename the group to match the original.
    inputs = []
    for x in range(0, gizmo.maximumInputs()):
        if gizmo.input(x):
            # print 'input: %s' % (gizmo.input(x).knob('name').value())
            inputs.append(gizmo.input(x))
        else:
            inputs.append(False)
    origName = gizmo.knob('name').value()
    origPosX = gizmo.xpos()
    origPosY = gizmo.ypos()
    deselectAll()
    # select knob, then run gizmo.makeGroup()
    gizmo.knob('selected').setValue(True)
    newGroup = gizmo.makeGroup()
    deselectAll()
    # delete original
    nuke.delete(gizmo)
    newGroup.knob('name').setValue(origName)
    newGroup['xpos'].setValue(origPosX)
    newGroup['ypos'].setValue(origPosY)
    # disconnect old inputs
    # reconnect inputs
    for x in range(0, newGroup.maximumInputs()):
        newGroup.setInput(x,None) 
        if inputs[x]:
            newGroup.connectInput(x, inputs[x])
        # print 'connecting output: %s to input: %s' % (inputs[x].knob('name').value(), newGroup.input(x).name())
    return newGroup
    
def convertGizmosToGroups():
    gizmos = getGizmos()
    if gizmos:
        for i in gizmos:
            convertGizmoToGroup(i)

There are four functions here: first, a method to determine whether or not a node is a gizmo (by looking for a “gizmo_file” knob). Second, a method to find all gizmos by running isGizmo on every node in nuke.allNodes(). Third, the big function that does most of the heavy lifting. There’s a built-in API call that can run on gizmos called makeGroup(), but by itself it’s not incredibly useful; it just wedges a converted copy of your Gizmo into the flow without doing anything to the original node, and it doesn’t name the group or anything. So this script is just adding onto that API call.

The first few lines of convertGizmoToGroup are creating an array of inputs so that we know what nodes are connected to the inputs of the gizmo. We’re just grabbing references to each input and stuffing it into the array. Next we record the name and position in the node graph of the original gizmo. We deselect everything and then set the “selected” knob of the gizmo to “True” (why we don’t have a real selection API call in Nuke, I’ll never know). Then we run the magic gizmo.makeGroup() command, which converts the gizmo into a group, but potentially leaves inputs disconnected, doesn’t delete the original gizmo, and doesn’t rename the new group. We do that in the next few lines; first deleting the original, then renaming our newGroup to match the original name, then setting the node graph position of the new group to match the original’s so it doesn’t mess up your organization. Finally, we loop through our array of inputs and connect them (in order) to the new group node. Pretty simple!

The last function is just a loop that converts every gizmo in the scene. You could import this module using menu.py and turn it into a button on a toolbar to quickly convert the whole scene from gizmos to groups by calling convertGizmosToGroups.
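
For example, assuming you saved the script above as gizmo_to_group.py somewhere on your NUKE_PATH (the module name here is my own), menu.py could register it like this:

import nuke
import gizmo_to_group

toolbar = nuke.menu('Nodes')
toolbar.addCommand('Scripts/Convert Gizmos to Groups',
                   'gizmo_to_group.convertGizmosToGroups()')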

Hope it’s useful!

May 18, 2015
 

I’ve finally implemented most of my planned pipeline. Just want to do a brief walkthrough to show off what this set of tools is capable of doing.

The pipeline uses symlinks to allow each asset or shot to exist inside its own self-contained Maya project workspace, while linking together key directories in those workspaces to all point towards the same repository. This way, textures can all be found in a single location, cache data can be shared between departments (Animation, Lighting, etc), and renders can all be easily found in a single place, without confusing Maya by rendering to paths outside of the workspace.

All project-level commands can either be run from a Linux terminal or from the web interface mentioned in the previous post. New project structures are generated using a master XML file that defines folder names/hierarchies and permissions bits for each folder, to prevent regular users from writing files where they shouldn’t.
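
The XML itself might look something like this (a made-up fragment to show the shape, not the actual production file):

<project>
    <folder name="assets" permissions="775">
        <folder name="work" permissions="777"/>
        <folder name="publish" permissions="755"/>
    </folder>
    <folder name="shots" permissions="775"/>
</project>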

The File Manager handles assets and shots. Assets can be “tagged” in order to categorize them in the menu on the left. These tags are arbitrary, can be changed at any time, and don’t affect the file structure at all, so the way assets are organized can evolve fluidly without screwing up references. Asset versions are organized using a public/private system, so users do whatever they need to do inside their own work folders, then “publish” the asset, which is automatically assigned a name and version number based on the department it’s being published to (such as Modeling or Rigging). Artists using these assets can automatically receive updates to their references without dealing with a “master” asset constantly being overwritten and screwing up in-progress renders or simulations. Each version has notes associated with it, and published versions get an automatic note tracing the file back to the work file it was originally saved from.

The Assets tab of the File Manager.

The Animation I/O tool handles the import/export of animation data, including cameras. Animation can be exported either as baked curve data (Maya ASCII), or as Alembic caches. The hybrid approach keeps the server footprint light while staying flexible, since blindly caching Alembic data for everything can seriously bog down a server, not to mention the time it takes to actually write an Alembic cache in Maya. Animation is written out to shared cache folders named after the shot it was exported from, and animation exported this way can be automatically updated using the import tools. Lighters don’t even have to know what assets are supposed to be in their scenes; the Match Exported Scene button will automatically pull in references as needed when importing data (even when the data is not Alembic). A single click will make sure that a lighter’s scene exactly matches the animator’s scene (minus all the junk that animators like to leave around).

The Look Manager tool handles shading and render layers. A user can shade, apply render layers, per-layer overrides, displacement sets, etc., and then save those customizations to a Look file. This Look can then be applied to an identical (or nearly-identical) asset anywhere else in the project. Materials are saved to a separate file that is then referenced into other scenes by the Look Manager. If you have multiple objects sharing the same Look, they will share the shader reference, helping the Hypershade stay as clean as possible. Shading and layer assignments are handled by object name, so if your receiving asset is named slightly differently (Asset_A vs. Asset_B) you can use a find/replace string to substitute object names. Shading in this pipeline is completely independent of lighting and can be adjusted and modified (aside from UV adjustments) anywhere, then saved as a Look for use by others. When Looks are removed or replaced, the Look Manager automatically removes unused render layers and displacement sets, as well as referenced materials files that are no longer in use.

The Look Manager interface.

The Preflight system helps keep assets and scenes conforming to pipeline structure. A series of checks is run, with presets for checks available based on department (Modeling, Rigging, Lighting, etc). Each check will be highlighted if there is a problem, which can then be fixed with an automated function. Each check maintains its own viewable log, so users can see exactly what the problems are and what is fixed if the function is run. The checks are dynamically loaded from a folder, and each check is based on a template. A pipeline operator or power user could add their own checks to this folder as long as they fit the template, and they will be automatically loaded into Preflight at next launch.
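
Loosely, each check module follows a template along these lines (a guess at the shape, not the actual production code):

# a hypothetical sketch of a Preflight check module
class Check(object):
    label = 'generic check'
    departments = ['Modeling', 'Rigging', 'Lighting']

    def run(self):
        """Inspect the scene; return (passed, log_lines)."""
        raise NotImplementedError

    def fix(self):
        """Attempt to automatically repair the problem."""
        raise NotImplementedError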

Renders, once finished, can be quickly loaded into an Auto-Comp generated in Nuke or NukeAssist. The ReadGroup script creates a special group that will take all single-channel passes in a folder and combine the channels together into a single stream. The entire group of Read nodes can be updated simultaneously by picking a new folder on the ReadGroup, rather than having to manually replace each individual channel. Another button on the ReadGroup automatically replaces each main sequence type (Beauty, Tech, and Deep pass groups) with the latest available version. Another click quickly generates a composite out of the channels within, adding or multiplying channels depending on the channel names (this function assumes a VRay pipeline but could be expanded to work with Arnold or mental ray) and creating a fast comp along with a contact sheet to check passes quickly. Lighters can use this tool to make sure their passes rendered correctly, and then use the “Approve Renders” button to automatically move footage from the “incoming” renders directory to the “approved” renders directory, where compositors can use them. Compositors can keep groups of renders linked together easily for version control, and use the auto-comp as a starting point for breaking out channels in their comps.

A generated Auto-Comp, with the interface panel on the right.

The pipeline is built to be as modular as possible, so each piece doesn’t necessarily need to know what the other pieces are doing in order to function. A few core scripts automatically set the user’s workspace when files are opened or saved, and a couple optionVars maintain the current user and project folder. Everything else can be derived from the workspace, so the tools can be used piecemeal in a pipeline (for the most part) if necessary. Most configuration is done in a single settings.py file, which can be edited to configure the pipeline for just about any server setup. The goal was to make a pipeline that could be entirely feature-complete if necessary, or to have the tools operate individually as a layer on top of an existing pipeline.
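
The optionVars work exactly the way you’d expect (the variable names here are placeholders, not the real ones):

import maya.cmds as cmds

# store and retrieve the current project root between sessions
cmds.optionVar(sv=('pipeline_projectRoot', '/jobs/MYSHOW'))
project_root = cmds.optionVar(q='pipeline_projectRoot')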

Sorry about the big long bragpost, just wanted to have a place to document what I’ve been spending all these months working on since it’s hard to put something like this in a demo reel!

Apr 09, 2015
 

I’ve been working for the last few months on a pipeline system for Wolf & Crow here in Los Angeles. My efforts are typically focused on Maya pipelines, which I’ll be documenting later, but I wanted to show off a little something I made that’s usually outside my repertoire… web programming.

The company needed an interface to handle mundane tasks like creating new jobs and assigning job numbers to them, creating new assets, and creating new sequences & shots for projects. Normally this would be as easy as duplicating a template folder, but the pipeline here makes extensive use of symbolic links in order to tie multiple Maya workspaces together in a way that’s as efficient as possible (for example, the “sourceimages” folder of every Maya project in a job can be symlinked to a single “textures” repository). Symlinks can’t be created in Windows, although Windows will happily follow symlinks served from a Linux server. So the bulk of the work has to be done on the server itself via Python or Bash scripts.

Prior to the web app, the standard procedure was to run plink.exe (an SSH command-line utility) inside a Python script using the subprocess module (see my earlier post on subprocess), which would pass a command over to the Linux file server to create all the symlinks. Or you could just SSH into the server using putty.exe and run the command yourself. This was clumsy, though, since you either needed to have Maya open to run the Python script through an interface, or you had to be comfortable enough with SSH and the Linux command line to run these scripts.
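
For reference, the old plink approach looked roughly like this (the server name, paths, and script arguments are all invented for the example):

import subprocess

# pass a single command string to the Linux file server over SSH
cmd = ['plink.exe', '-ssh', 'pipeline@fileserver',
       'python /pipeline/scripts/make_shot.py --show MYSHOW --shot 0010']
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()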

Instead, there’s a (sort of) sexy web interface that anyone can run from a browser! Here are some examples.

Click below to see a whole bunch of code.
