May 30, 2017

I’m going to try to make a nice easy introduction to my two favorite functions in Houdini VEX (besides fit01 and chramp of course): xyzdist and primuv. These functions are at the core of a lot of really useful and cool tricks in Houdini, including rivets, the attributeInterpolate SOP, the old “droplets falling down a soda can” effect, and some really awesome stuff with volume shaders. I’ll do a little example of each as a way of showing off what you can do with these clever little tools.

First, let’s take a look at the VEX signature (this is the most commonly used of xyzdist’s several overloads):
float xyzdist(string geometry, vector pt, int &prim, vector &uv, float maxdist)

At its most basic, xyzdist will return the distance from the sample point pt to the nearest location on the surface of geometry. Note that this doesn’t mean the nearest actual point, but the nearest spot on the interpolated surface in between those points.

Those little “&” symbols mean that this function will write to those parameters, rather than just read from them. So if we feed this function an integer and a vector, then in addition to the distance to the surface, it will also give us the primitive number prim and the parametric UVs uv on that primitive. Note that parametric UVs are not the same as regular UVs… they just describe the normalized position relative to the individual primitive we found.
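To make that concrete, here’s a minimal sketch of the classic xyzdist/primuv pairing in a Point Wrangle. It assumes some reference geometry is wired into the wrangle’s second input and that it carries a Cd attribute (both of those are just assumptions for the example); in a wrangle you can pass an input index in place of a geometry file path.

// Point Wrangle; reference geometry assumed on input 1
int prim;
vector uv;
float maxdist = 1e9; // effectively "no limit" for this example

// distance to the interpolated surface, plus which prim we hit and where on it
float dist = xyzdist(1, @P, prim, uv, maxdist);

// sample any attribute (Cd here, as an example) at that exact parametric position
v@Cd = primuv(1, "Cd", prim, uv);
f@dist = dist;

This is essentially what rivet-style setups and the attributeInterpolate SOP are doing under the hood: find the nearest spot on a surface, then pull interpolated attribute values from that exact spot.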

So, what can we do with this? Click below to find out… Continue reading »

Dec 29, 2016

My good friend and motion graphics bromance, Eli Guerron, asked me to help him create a procedural system of branching circuits that would look something like a schematic drawing. I stupidly thought this would be an easy trick with particles, but to get the right look it actually took quite a lot of experimenting.


The reference video he showed me had branching structures that would occasionally turn or split apart at precise angles. This part was easy enough to figure out, but the real trick was preventing collisions between branches. I first thought that I could use a modified version of the space colonization algorithm to create the non-intersecting patterns, but even after writing a function to restrict the growth directions to increments of 45 degrees (explained below), I couldn’t get the patterns to look right. Part of the reason the reference looks “schematic-y” is that the traces run alongside each other in parallel, sometimes splitting apart but generally traveling in groups. Space colonization just doesn’t grow that way, so I had to throw that method out.

The method that worked the best (it’s still not perfect) is just growing polylines in a SOP Solver, completely manually in VEX snippets. Each point has a starting velocity vector (this is left over from when I originally tried to solve this with particles), and on each timestep the endpoints will duplicate themselves, then add their velocity to their point position, and draw a polyline between the original and the duplicate. This forms the basis of the traces. The code for this is pretty straightforward:

// growth distance per timestep (assumed here to come from a channel)
float speed = ch("speed");
// create new point, inheriting attrs from this point
int newpt = addpoint(0,@ptnum);
// assign new position to new point
vector newv = normalize(v@v) * speed;
vector newpos = @P + (newv);
setpointattrib(0,"P",newpt,newpos,"set");
setpointattrib(0,"v",newpt,newv,"set");
// add primitive to connect new point to current point
int newprim = addprim(0,"polyline");
addvertex(0,newprim,@ptnum);
addvertex(0,newprim,newpt);
// remove old point from "growth" group...
// this can be done by just setting a point attribute

The next step is to get the traces to randomly zigzag at 45-degree angles. Since I’m dealing with vectors here, the rotation is most easily done (IMO) via a matrix. The math really isn’t too bad, but there are a few steps that need to happen to make this work.

First, I need to create a generic identity matrix. This is just a matrix for the point that, when multiplied against the point position (or any other vector attribute, such as velocity), returns the same vector right back. Creating an identity matrix is easy:

matrix3 m = ident();

Next, I want to generate an angle of 45 degrees. When dealing with vector math in general, you typically want radians, not degrees. 45 degrees is equivalent to π/4 radians, which I can then randomly negate to create either a positive or negative angle.

matrix3 m = ident();
float PI = 3.14159;
float angle = (PI * 0.25);
if(rand(@ptnum*@Time+seed) < 0.5) {
    angle *= -1;
}

Now I need to define an axis to rotate my angle around. Since my schematic is growing on the XZ plane, the axis to bend around would be the Y axis, or the vector {0,1,0}. Once I have the axis, the angle, and a matrix, I can use VEX’s rotate() function to rotate my identity matrix. All you have to do at that point is multiply any vector by that matrix, and it will rotate by the specified angle. Keep in mind that for predictable results, your vector should be normalized before multiplying it against a rotation (3×3) matrix. You can always multiply the resulting vector by the length of the original vector once you’re done.

matrix3 m = ident();
float PI = 3.14159;
float angle = (PI * 0.25);
if(rand(@ptnum*@Time+seed) < 0.5) {
    angle *= -1;
}
vector up = {0,1,0};
rotate(m,angle,up);
newv = (normalize(newv) * m) * speed;
setpointattrib(0,"v",newpt,newv,"set");

The branching step is almost exactly the same. I just randomly generate a new point and rotate its velocity by +/- 45 degrees, but don’t remove the original point from the growth group. (In my example HIP, the “tips” growth group is defined by being red, while the static group is green. The “tips” group is redefined at the beginning of each step based on color.)
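For reference, a branching step along those lines might look something like this sketch (it assumes the same speed channel as above, plus a couple of channels I’m making up for the example; it isn’t the exact code from the HIP):

// runs only on the current growth tips
float speed = ch("speed");
float seed = ch("seed");                   // assumed channel for randomization
float branch_chance = ch("branch_chance"); // assumed channel, e.g. 0.1

if(rand(@ptnum * 231.7 + @Time + seed) < branch_chance) {
    // duplicate this tip; the copy inherits P, v, and Cd, so it stays red
    // and gets picked up as a new growth tip on the next timestep
    int branchpt = addpoint(0, @ptnum);

    // rotate the copy's velocity +/- 45 degrees around the Y axis
    matrix3 m = ident();
    float PI = 3.14159;
    float angle = PI * 0.25;
    if(rand(@ptnum * 57.3 + @Time + seed) < 0.5) {
        angle *= -1;
    }
    vector up = {0,1,0};
    rotate(m, angle, up);
    vector branchv = (normalize(v@v) * m) * speed;
    setpointattrib(0, "v", branchpt, branchv, "set");

    // the original point is left alone, so it keeps growing in its current direction
}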

This setup alone gets some pretty cool patterns, but it’s messy without collision detection.

The collision detection is where things get tricky. I wanted growth points to look ahead of themselves along their velocity vector, and if they detected a collision, to try to match the collision surface’s velocity. This process repeats in a loop until the point can verify that nothing is blocking future growth. If the loop exceeds a maximum number of tries, it kills the branch by removing the point from the growth group.

The easiest way to “look ahead” and grab the velocity from collided polylines was to use the intersect() function. This returns a primitive number (or -1 if there was no collision), and parametric UV coordinates at the collision site. The primitive number and UV coordinates can then be fed into the primuv() function, which can be used to grab any attribute value at those exact coordinates. The velocity value at the collision point can then be assigned to the growth point so it moves in the same direction.

// find intersection along velocity vector.
// if an intersection is found, inherit that point's
// velocity attribute.
// repeat this up to "maxtries" to resolve collisions.
// if there is still a collision when maxtries is reached,
// remove this particle from the tips group (turn green).

vector interP;
vector forward = {1,0,0};
vector up = {0,1,0};
float PI = 3.14159;
float interU;
float interV;
int maxtries = chi("max_tries");
int try = 0;

while(try <= maxtries) {
    int primnum = intersect(1,@P,v@v*4,interP,interU,interV);
    if(primnum != -1) {
        // we collided
        // get velocity of collision point
        vector uv = set(interU,interV,0);
        vector collideV = primuv(1,"v",primnum,uv);
        v@v = collideV;
        // if this was the last try, this line has to stop
        if(try==maxtries) {
            // turn green to remove from tips on next step
            @Cd = {0,1,0};
        }
    }
    try+=1;
}

This works… okay, but because the points sometimes collide in between points whose velocities are at angles to each other, every once in a while the primuv() function will return an interpolated velocity that’s a little “fuzzy,” meaning not locked to a 45-degree angle. It doesn’t happen often, but when it does happen, you start seeing some weird curvy lines. So as a last step, the velocity vectors need to be locked to 45-degree angles, just in case.

First, I define a forward and an up vector. Forward is +X (1,0,0), and up is +Y (0,1,0). The forward vector gives me an axis to compare the velocity against… since this schematic is being drawn on the XZ plane, the +X axis is just a convenient reference direction for measuring rotations. To rotate angles on this XZ plane, I rotate around the Y axis (up).

Figuring out the angle between two normalized vectors is straightforward. The formula is this:

float theta = acos( dot(V1,V2) );

This returns the angle (typically called theta) between the unit vectors V1 and V2, in radians. Because acos() only returns values between 0 and π, the sign of the rotation is lost, so if the velocity has a negative Z component, we just multiply the angle by -1.

float angle = acos( dot( normalize(v@v),forward) );
if(v@v.z < 0) {
    angle *= -1;
}

Next, we need to figure out how far away our angle is from a 45-degree angle. We can do this by using the modulo operator to figure out the remainder when we divide our angle by (π/4).

float angle = acos( dot( normalize(v@v),forward) );
float rem = angle % (PI * 0.25);
angle -= rem;
if(v@v.z < 0) {
    angle *= -1;
}

Now we have a nice 45-degree-divisible angle relative to +X (or 0 radians). We need to turn this into a unit vector, which we can then multiply against our original velocity vector’s length to get the final velocity.

Check out this circle diagram to see what we’re doing here:
(The unit circle: cos θ and sin θ give the X and Y coordinates of a point at angle θ.)

Substitute Y with Z and you can already see the exact formula we’re going to use. Thanks to the unit circle, we can easily plot out where on the circle we’ll be with a little more trig. Then we just have to multiply that against the original velocity’s magnitude (length).

float angle = acos( dot( normalize(v@v),forward) );
float rem = angle % (PI * 0.25);
angle -= rem;
if(v@v.z < 0) {
    angle *= -1;
}
vector newV = set(cos(angle),0,sin(angle)) * length(v@v);
v@v = newV;

The final result is pretty cool! I still don’t like how some of the branches get too close to each other, and tips can still overlap when two of them collide at the same spot, but it’s pretty close to the original reference. I’d like to eventually figure out a better way to keep the spacing between traces a little more consistent (right now they sometimes bunch up tighter than I’d like), but that will have to happen later on.

Here’s the HIP file, if you want to play with it!


Jan 21, 2015

I’m working on a production right now that involves sending absolutely enormous animated meshes (6.5 million polygons on average with a changing point count every frame) out of Houdini and into Maya for rendering. Normally it would be best to just render everything directly out of Houdini, but sometimes you don’t have as many Houdini licenses (or artists) as you’d like so you have to make do with what you have.

If the mesh were smaller, or at least not animated, I’d consider using the Alembic format, since V-Ray can render it directly as a VRayProxy. But for a mesh this dense animating over a long sequence, the file sizes would be impossibly huge, since Alembics can’t be output as sequences. Try loading a 0.5 TB Alembic over a small file server with 20 machines rendering the sequence, and you can see why Alembic might not be ideal for this kind of situation.

Hit the jump below to see the solution.
Continue reading »

Sep 11, 2014

One of the bigger challenges with rendering liquids is that it can be difficult to get good UVs on them for texturing. Getting a displacement map on a liquid sim can make all the difference when you need some added detail without grinding out a multimillion-particle simulation. Unfortunately, liquid simulations have the annoying habit of stretching your projected UVs out after just a few seconds of movement, especially in more turbulent flows.

In Houdini smoke and pyro simulations, there’s an option to create a “dual rest field” that acts as an anchor point for texturing, so that textures can be somewhat accurately applied to the fluid and advected through the velocity field. The trick with dual rest fields is that they regenerate every N seconds, offset from each other by N/2 seconds. A couple of detail attributes called “rest_ratio” and “rest2_ratio” are also created; these are basically just sine waves at opposite phases to each other, used as blending weights between the two rest fields. When it’s time for the first rest field to regenerate, its blend weight is at zero while the rest2 field is at full strength, and vice versa.

It’s great that these are built into the smoke and pyro solvers, but of course nothing in Houdini can be that easy, so for FLIP simulations we’ll have to do this manually. Rather than dig into the FLIP solver and deal with microsolvers and fields, I’ll do this using SOPs and SOP Solvers in order to simplify things and avoid as many DOPs nightmares as possible.

Here’s the basic approach: create two point-based UV projections from the most convenient angle (straight down onto the XZ plane, in my case) and call them uv1 and uv2. As point attributes, they’ll automatically be advected through the FLIP solver. Then reproject each UV map at staggered intervals, so that uv2 always reprojects halfway between uv1 reprojections. We’ll also create a detail attribute to act as the rest_ratio, which will always be 0 when uv1 is reprojecting and 1 when uv2 is reprojecting. It all sounds more complicated than it really is (there’s a rough sketch of the timing logic below). Here goes…
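As a rough sketch of that timing logic (the “period” channel name and the reprojection flags are just placeholders I’m assuming for the example), a Detail Wrangle inside the SOP Solver could compute the blend weight and flag when each UV set is due for a reprojection:

// "period" = seconds between reprojections of the same UV set (assumed channel)
float PI = 3.14159;
float period = ch("period");
float t = @Time / period;

// 0 when uv1 is due to reproject, 1 when uv2 is due (half a period later)
f@rest_ratio = 0.5 - 0.5 * cos(t * 2 * PI);

// flag the timestep where each UV set crosses its reprojection time;
// downstream nodes can use these to switch the projections on and off
i@reproject_uv1 = (t % 1.0) < (@TimeInc / period);
i@reproject_uv2 = ((t + 0.5) % 1.0) < (@TimeInc / period);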

Continue reading »

Sep 08, 2014

I ran into a problem recently where I was trying to make some nice-looking embers in Houdini, complete with nice motion-blurred trails. Typically with a particle system you use the velocity attribute to handle motion blur, but geometry velocity blur is always linear, so your motion trails will always be perfectly straight even if your embers have nice squiggly motions.

Deformation motion blur looks great, but in most simulations particles are being born and dying all the time, and deformation motion blur doesn’t work with a changing point count.

The solution is to force a constant point count. This can be problematic when your particles need to have a lifespan, so there are a few little tricks you’re going to have to pull in order to make this work…
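As a hedged illustration of the general idea (not necessarily the exact approach in the full post): one common way to fake a lifespan while keeping the point count constant is to leave “dead” particles in the geometry and just hide them from the renderer, for example in a Point Wrangle after the sim:

// sketch: hide dead particles instead of deleting them,
// so the point count (and point order) never changes
if(@age > @life) {
    f@pscale = 0; // render as zero-size points
    f@Alpha = 0;  // and fully transparent, just in case
}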

Continue reading »

Nov 19, 2013

I just wanted to post a little bit about the Creep POP in Houdini because I’ve had such a hell of a time getting it to do anything useful.

I was trying to create the effect of raindrops sliding down glass. In Maya, this is a pretty simple thing: enable Needs Parent UV on the emitter, create Goal UV attributes on the particles, and set goalU=parentU and goalV=parentV in the creation expression. The particles stick to the object, and then you can work from there.

Houdini, as it usually does, makes things a little more complicated. The Creep POP looks for two important attributes: POSPRIM and POSUV. It wants to know what primitive to stick to, and what UV coordinates on that primitive to stick to. Of course, when emitting a particle randomly on a surface, you rarely have any idea what those numbers are, so we’ll have to use some expressions to create them manually.

Here goes. On the Source POP you’re emitting from, set up a birth group (leave Preserve Group off) and call it something like “justBorn.” We’ll create the attributes only on the brand new particles so they know what their starting position should be for the Creep. Next we’ll create two Attribute POPs. The first one we’ll use to create POSPRIM, so we know what primitive on the source geometry the particle was emitted from. On the first Attribute POP, set the Source Group to “justBorn”, the type to Integer, and the name to “posprim.” The expression for the value looks like this:

xyzdist($TX,$TY,$TZ,"../path_to_emitter",-1,3)

Here’s what this is doing. xyzdist() can return a number of things about an object based on a position in space; in this case, we’re querying based on the position of each new particle, $TX $TY $TZ. We want to get values based on the object we’re emitting from, which is ../path_to_emitter. The -1 means that we just want to return values based on the nearest primitive we can find. If we used any other value, it would return values specifically from one primitive. The 3 means that we just want to return a primitive number, and nothing else.

If you check the Details View now and run your simulation a bit, you should see that each new particle now remembers what primitive it was emitted from via our new POSPRIM attribute.

Now we have to take care of POSUV, which is done in a very similar manner. On your second Attribute POP, again set the Source Group to “justBorn.” This time the type is Float and the size is 2 (since we’re dealing with a UV coordinate) and the name is “posuv.”

Here are the expressions for this one. The first expression goes into the first Value slot, and the second expression goes into the second Value slot.

xyzdist($TX,$TY,$TZ,"../path_to_emitter",$POSPRIM,1)

xyzdist($TX,$TY,$TZ,"../path_to_emitter",$POSPRIM,2)

It’s almost exactly the same expression every time… the difference between these expressions and the first one is that now we’re looking for U and V coordinates for a specific primitive based on our location. Since we have POSPRIM to tell us which primitive we should be sampling for every point, we can use the values 1 and 2 at the end of the expression to return U and V coordinates, respectively, based on our position. Now every particle knows what primitive it was emitted from, and the UV coordinates it was emitted from exactly. Check the expression help for xyzdist() to get a better idea of what kinds of things you can return from this expression.

Now you can just drop down a Creep POP and use the default local variables $POSPRIM and $POSUVU/$POSUVV for the Prim Number and Prim UV. If you want to use forces to drive the particles, set the Behavior to Slide and use whatever force you want to push the particles along the surface. It’s not so bad once you do it a few times; it’s just too bad this behavior doesn’t have a quicker setup.

Apr 15, 2013

My latest gigantic Houdini project is finally live! I was the Technical Director for this one, which also means Houdini tube effects, lighting, shading, rendering, etc. Thanks to my fellow Houdini artist, Alvaro Segura, for handling the inky effects, as well as for helping me to refine the tube generator OTL I spent so long constructing.

Feb 01, 2013

Just wanted to document a dumb little problem I was having. My compositors need cameras in .FBX format, and I had a camera with a Look At object (an aim vector) that wasn’t being taken into account on export. I tried parenting a duplicate of the first camera to the original and exporting that, but of course I just ended up with a still camera.

The solution is to use CHOPs. Make a CHOP network, then drop down an Object CHOP. Select the object you want to bake as the Target Object. Then you can create a duplicate of your original object, and use this expression to drive any of the channels you need to bake:

chop("../chopnet1/objectCHOPname/channelName")

Or you can use an Export CHOP, point it to the duplicate object, and make sure the Path field includes the full list of channels you want to export (such as t[xyz] and r[xyz]).


Jan 29, 2013

I’ve been very, very busy, which is a lame excuse for the lack of posting new things, but there you have it.

Much of my work time recently has been dedicated to building inky, nebula-like effects in Houdini. Generally speaking, when you’re trying to make that whole ink-in-water effect, you dust off your copy of 3DS Max, do a quick fluid simulation, advect a bunch of particles through it, and then cache out a bazillion partitions of your simulation with different random seeds, then render in Krakatoa. Well, I have none of those things, and I needed greater control over the simulation than what can typically be achieved in Fume/Krakatoa, so I tried to do it in Houdini. I definitely have a lot of respect for the Krakatoa renderer after a week spent on this effect. This stuff is hard! Hit the jump to see exactly how I went about this…

Continue reading »

Sep 29, 2012

I’m working on a project right now that involves exporting cached geometry from Houdini to Maya. The Alembic node makes that a fairly painless process now that Houdini and Maya both support Alembic import/export, although it turns out that getting any data other than point positions and normals is kind of a hassle. I tried renaming the Cd point attribute in Houdini to all kinds of things in the hopes that Maya would recognize the data, but Maya wasn’t having any of it. That’s when I checked out the script editor in Maya a little more closely and saw this:

// Error: line 0: Connection not made: 'output_AlembicNode.prop[0]' -> 'subnet1.Cd'. Data types of source and destination are not compatible. // 
// Error: output_AlembicNode.prop[0] --> subnet1.Cd connection not made //

Maya is creating this “Cd” attribute on the mesh, but it has no idea what to do with the data so it throws an error. The Alembic node, though, still contains that data, as the 0th index of the array “prop.” Now all you have to do is get that data from the Alembic node onto the vertex color somehow…

The SOuP plugin for Maya has the answer. There is a node called “arrayToPointColor” that will read an array of data and apply it to the point color of the mesh it’s connected to. Create the arrayToPointColor node, and feed it your geometry (mesh.worldMesh[0] -> arrayToPointColor.inGeometry) and then feed it your data array from the AlembicNode (AlembicNode.prop[0] -> arrayToPointColor.inRgbaPP). If you have more than one point attribute exporting from Houdini, you may want to check the script editor to make sure you know which index of AlembicNode.prop you are supposed to be connecting.

Finally, make a new mesh node and connect arrayToPointColor.outGeometry -> newMesh.inMesh. If you were to select some vertices and look at their properties in the Component Editor, you should see values attached to the “red,” “green,” and “blue” vertex attributes.

All that’s left to do at this point is to connect this color data to a texture that can read it. In mental ray, you’d create a mentalrayVertexColors node, and connect newMesh.colorSet[0].colorName -> mentalrayVertexColors.cpvSets[0]. If you don’t see a colorSet[0].colorName property on your mesh, try selecting the mesh, then go to Polygons > Colors > Color Set Editor. You should see a colorSet1… just select it, click “Update” and you should have the property you’re looking for. Then connect the mentalrayVertexColors node to any shader. See Fig. 1 for the example network.

Fig. 1: Node Editor network connections to link Alembic attributes to vertex color.

You can also just remove the middleman entirely at this point, and delete the original shape node. Then connect the AlembicNode.outPolyMesh[0] to arrayToPointColor.inGeometry. This is probably a good idea if only because it will stop Maya from throwing annoying errors every time you select the geometry because of that missing “Cd” connection.