Chris Judkins - Visual Effects Artist Posts

What to put in your reel

On the Realtime VFX forums, user Samulon asked in regards to VFX reels:

Is it better to “show off”, go over the top with the effect or should I show that I understand the gameplay restrictions, (…)?

I had strong opinions on this, and my answer was as follows:

Please please please show stuff in context. Gameplay is king. If you want to make VFX for games, why show VFX with no game? That’s not to say that breakdowns aren’t helpful, but I really want gameplay stuff (or environment or character or whatever kind of effects you specialize in) to take priority.

There’s lots of reels with anime-style effects with really exaggerated charge-ups, or cinematic style explosions with tons of camera shake that linger, or weapon trails that look like a total lightshow, and – don’t get me wrong, that stuff is dope and a lot of fun to make – but it tells me nothing about whether you’re able to make a good contribution to our team.

I get really impressed when I see someone tell a story with an effect that’s only half a second long. Show off by making something that’s punchy and appealing without sacrificing gameplay readability. Just because something is loud or “flashy” doesn’t mean that it’s good, it needs to serve its purpose and fit in its context. Your reel is the only place where you can prove that you fully understand this balancing act.

Moving to CentOS

I finally bit the bullet and switched my workstation over from Windows 10 to CentOS 7. It’s something I’ve been curious about doing for a long time, but game development tools being heavily reliant on Windows has kept me from doing it. However, now that both Unreal and Unity have stable Linux builds, I don’t really have any excuses anymore!

For those unaware, CentOS is a commonly used operating system within the VFX industry. Being able to change how things work under the hood makes it easier to squeeze the most out of your hardware, and to have all your DCCs more deeply integrated with the OS. The CentOS distribution is popular mainly due to its stable nature (and maybe its lack of unforeseen forced updates). Because of this commitment to stability, it will only update its software to newer versions if deemed absolutely necessary – something which can count as both a blessing and a curse. Getting the latest and greatest of any given software is of course always possible, but it might be somewhat of an involved process.

I’m quite surprised at how well-supported Linux is these days – not only in terms of the heavy-duty work apps like Houdini and Substance, but convenient utilities such as Spotify, Simplenote, Slack, etc.

I wonder how long it’ll be until we see a wholly VFX-oriented distribution? Get on it, VES!

Houdini Quick Trick: Rendering Odd Frames

I’ve been dealing a lot with authoring flipbook textures in Houdini lately, and I wanted to share a quick tidbit on rendering (and reading) image sequences that aren’t very sequential.

Most ROP output nodes in Houdini will allow you to output every nth frame by changing the rendering increment, saving both rendering time and disk space.

ROP with the Increment Parameter highlighted

However, with this comes the issue that if you’re using the default $F variable in your filenames, you’ll end up with names such as 01, 03, 05, 07, etc. This makes it messy when you try to read these files back into Houdini for rendering or compositing purposes, or even if you’re just trying to check things out in mplay. There are of course expressions to only read back every other frame, as well as external tools to batch rename files, but I wanted to show you a way to have them written correctly from the get-go.

Let’s say that you’re rendering out every 2nd frame of texture.$F4.tga, but you want it to output 0001, 0002, 0003.. instead of 0001, 0003, 0005.. We can achieve this by replacing the filename with the following:

texture.`padzero(4, floor($F/2) + 1)`.tga

ROP with the expression in the Output Picture field

Let’s deconstruct this expression:

  • padzero() is a function that adds padding (leading zeroes) to a number, which allows us to emulate the 4 in $F4. The first argument is the total number of digits to pad to, while the second argument is the number to pad. padzero(2, 5) evaluates to 05, padzero(6, 24) evaluates to 000024, etc.
  • floor() is a common math function to round a number downwards. Since frame numbers should be integers (whole numbers), we use floor() to round results such as 64.3 down to 64.
  • $F/2, as you might guess, simply divides the frame number by 2. If you are outputting every third, seventh, etc. frame instead, you should replace 2 with the desired number.

So in practice, what happens when we’re rendering out frame 37 of our simulation is the following:

  • $F/2 will divide 37 by 2, outputting 18.5
  • floor(18.5) will round it down to 18
  • 18 + 1 gives us 19 – we need this offset because frame 1 would otherwise evaluate to 0 instead of 1
  • padzero(4, 19) will pad the number, outputting 0019
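If you want to sanity-check the mapping outside of Houdini, the expression is easy to mirror in plain Python. A minimal sketch – the function name and parameters are my own, and zfill() stands in for padzero():

```python
import math

def remapped_name(frame, step=2, pad=4, prefix="texture", ext="tga"):
    """Mirror of the HScript expression:
    prefix.`padzero(pad, floor($F/step) + 1)`.ext"""
    index = math.floor(frame / step) + 1  # floor($F/2) + 1
    return "%s.%s.%s" % (prefix, str(index).zfill(pad), ext)  # padzero()

# Frame 37 with an increment of 2 lands on the 19th rendered frame:
print(remapped_name(37))  # texture.0019.tga
```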

Using expression magic, we now have beautiful filenames that behave predictably when used with other programs!
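And if you’ve already rendered a sequence with odd-numbered filenames, the batch-rename route mentioned earlier doesn’t have to involve external tools. Here’s a minimal Python sketch – the prefix, extension, and padding defaults are assumptions, so adjust them to match your own naming scheme:

```python
import os
import re

def renumber_sequence(directory, prefix="texture", ext="tga", pad=4):
    """Rename e.g. texture.0001.tga, texture.0003.tga, texture.0005.tga
    to a contiguous texture.0001.tga, texture.0002.tga, texture.0003.tga."""
    pattern = re.compile(r"^%s\.(\d+)\.%s$" % (re.escape(prefix), re.escape(ext)))
    frames = []
    for name in os.listdir(directory):
        match = pattern.match(name)
        if match:
            frames.append((int(match.group(1)), name))
    # Walk the frames in order and close the gaps in the numbering
    for new_index, (_, name) in enumerate(sorted(frames), start=1):
        new_name = "%s.%s.%s" % (prefix, str(new_index).zfill(pad), ext)
        os.rename(os.path.join(directory, name), os.path.join(directory, new_name))
```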

Bonus Lesson: Linking to the Increment Parameter

Let’s say that you find yourself experimenting with different increments, but you don’t want to constantly update the $F/2 part of the expression to match what you’ve entered. On most ROP output nodes, the increment parameter is referred to as f3, so by instead writing $F/ch('f3') it’ll reference whatever is assigned to the Increment parameter, making the full filename:

texture.`padzero(4, floor($F/ch('f3')) + 1)`.tga

Proceduralism!

The tooltip will show you how to refer to a parameter in expressions

On Particle System Editors

One day at work, my neighbouring graphics programmer casually rolled over and asked “So we’re thinking of redoing the effects editor, do you have any thoughts on how it should work?”. The short and simple answer would of course have been to just state what’s missing and what could be improved, but instead I started thinking about the nature of effects editors, and why they look and work the way they do. I have a lot of thoughts, so be prepared for some verbosity.

As far as game art goes, effects work is absolutely one of the more technical disciplines. It strikes a razor-thin balance between classic art theory concepts such as color and composition, a sense of timing and impact often perfected by full-time animators, and lastly, a great deal of technical know-how.

Modern effects editors, in my mind, tend to focus less on the compositional, artistic, or animated process of effects authoring – instead making it a process focused on tweaking numbers. Let me make my case by quickly summarizing the major approaches we have to effects authoring:

Spreadsheet Editors

Spreadsheet editors are what we’re given in most tools currently. Unity, Unreal, Frostbite, and who knows how many other engines have all opted to give us an interface where you’re presented with a number of fields where you can type in numbers, and thus author your effects. Some of these editors allow for advanced features such as supporting expressions or animating values over time, some of them provide a preview that updates in realtime while others require resaving and rebuilding after each change, but in terms of interaction they are all largely the same.

Node Editors

While still a relatively rare sight in realtime applications, node editors are definitely on the rise. They have a storied and proud heritage in offline applications such as 3ds Max, Softimage, and Houdini. The mythical beast Niagara has certain node-based components, but I’ll withhold judgement until it’s been properly released. Frostbite also implements a graph-based system for authoring GPU particle systems. As with shader graphs, they make a strong effort to make the authoring of effects less text-based and more interaction-based, but the concepts and vocabulary are generally identical.

Script Editors

Script Editors refer to systems such as PopcornFX or what was used in Infamous: Second Son where particle behavior is primarily driven through scripts and expressions. This approach is powerful and flexible, and lends itself well to people comfortable with programming. But for all the freedom and power they offer, I would consider these editors the weakest in terms of intuitiveness and artist-friendliness.

The Problem

My issue with the above approaches is the set of assumptions on which their foundations are built. While they differ in terms of iterability and intuitiveness, they all originate from an assumption that effects work is a technical undertaking. It’s not simply a matter of making tools more “artist friendly” (whatever that means), it’s a matter of changing our vocabulary to something less abstract and mathematical. Let’s imagine a world where effects authoring is closer to 3D sculpting or character animation rather than programming. Some ideas of what we’d potentially see:

  • The above approaches all have a focus on text entry, but rather than typing coordinates and dimensions for emission volumes, why not just place and scale them using the same manipulation tools we use for other 3D models? Speed and direction could be represented as arrows or vector fields, and adjusted directly (similar to what we see in Impromptu Vector Field Painter). Trajectories could be drawn directly in the viewport as splines without the hassle of modulating XYZ velocities over life.
  • Particle systems treat their content as points in space with sizes and velocities, no matter what they’re trying to mimic. Fire is not defined by its heat, and rock is not defined by its mass. This creates a dissonance when we at the same time insist on defining a constant downward force as gravity or define wind speeds in physical units such as m/s. There is a discussion to be had in regards to how we can define and achieve physically correct visual effects in games, but that would easily warrant its own dedicated post.

The above examples illustrate what happens when we approach effects work as something tangible: when we describe particle systems by what we’re trying to create, rather than as point positions and spawn properties. Not only would these systems be more approachable by artists with non-technical backgrounds, we would achieve a workflow that is more iterable, more art directable, and most importantly, more stimulating to work with.

It’s of course easy to be opinionated when I’m not in charge of the implementation, but I maintain that it’s an important exercise to second-guess the nature of the tools we use on a daily basis. I would like for effects artists and engineers to spend some time reevaluating our assumptions regarding effects authoring, because the notion that effects authoring is a technical exercise rather than an artistic one is informing the way our tools are being designed. The prevalence of the spreadsheet approach is a typical and disheartening example, since it demonstrates a lack of imagination in terms of how we see artists authoring effects. I believe we can change the way we work with effects systems without having to change a single line of backend code – it is just our interaction with the systems that needs rethinking.

I can’t wait to see new paradigms for effects authoring be invented, and discover what we’ll be able to create at that time.


Hello and welcome to my site! For years my website consisted of little more than a reel and some screenshots, but I feel that it’s time to start giving more back to the community.

On this blog I’ll repost links and articles that are of interest to people pursuing the digital arts. I’ll also make sure to contribute a few original tutorials and articles of my own.

Thank you for visiting!