Graphical shader systems are bad

It is very easy for programmers to be seduced by pretty tech. Case in point: graph-based shader systems. Unless you’ve been sleeping under a rock, you must have noticed that these systems have become haute couture in the rendering world. Just so we’re clear, what I’m talking about are systems where you can construct shaders in a graphical UI, stringing together Lego-like building blocks of shader code snippets, with connecting lines. Like this:

[Image: a shader graph in Unreal’s Material Editor]


It’s possible the trend started with Unreal’s Material Editor (featured above), but Epic Games certainly didn’t invent these systems. This functionality has been around for a long time in various software packages, such as Maya’s Hypershade. You can find several different graph-based systems neatly illustrated here.

These systems are certainly very pretty, they’re absolutely cool tech, and they’re quite powerful in that you can piece together basically any type of shader functionality you would like. But therein lies the problem, and why I think they’re absolutely the wrong way to go, at least as long as performance still matters. You see, these shader systems are full-fledged graphical programming languages, and exposing a programming language, whether graphical or not, to a nonprogrammer is rarely the right thing to do. There are exceptions, of course, such as scripting languages for designers, but even there the exposed functionality should be as limited as possible. (Yes, programmer guy, this means that exposing Lua to your designers is a terrible idea on your behalf.) However, when we’re talking about something as performance sensitive as fragment and vertex shaders, it’s outright criminal to have nonprogrammers do the programming.

Those who know me might think this is a departure from my usual “empower the user” spiel, and since subtle points are usually lost on people, let me hammer this point home. Don’t get me wrong: graphical shader systems are great for offline rendering (film), for rapid prototyping, or other scenarios where iteration time is much more important than runtime performance. For games, however, where we ultimately have to meet a frame rate goal of 30 or (ideally) 60 frames per second (or 20, if you’re crap), simply too much performance will be lost in a system like this.

At the end of the project, when you truly notice how out of hand the shaders have become, how do you reel things back in to where they should be? Not very easy when there are custom graphs (read: custom code) for every single material in your game (of which you have many hundreds). How do you provide shader LOD by dropping features when there really aren’t any predefined features (e.g. someone could have reimplemented that parallax-mapping “box” behind your back, so removing the box doesn’t do you any good)? Just to name a few issues.

So Christer, you say, what’s the alternative? What do you use, huh!? Well, though it is hardly without problems of its own, we opted for an übershader approach for our engine. By this I mean that we have a small number of artist-selectable shaders, which each contain a large number of code branches corresponding to features (whether to do environment maps, parallax mapping, emissive, etc). Behind the scenes (in the tools) we create permutations of these top-level shaders depending on checked features as well as on other variables, to generate branchless shaders that actually end up in the game and that run “at speed.”
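
To sketch what that tool-side permutation step might look like — this is an illustrative mock-up, not our actual pipeline, and all names in it are hypothetical — each checked feature becomes a preprocessor switch, and the tools emit one preprocessed variant per combination so the shaders that ship contain no branches:

```python
from itertools import product

# Hypothetical artist-facing feature toggles for one top-level shader.
FEATURES = ["ENV_MAP", "PARALLAX", "EMISSIVE"]

def permutation_defines(features):
    """Yield one #define block per on/off combination of the features."""
    for bits in product((0, 1), repeat=len(features)):
        yield [f"#define USE_{name} {bit}" for name, bit in zip(features, bits)]

def build_variant(defines, ubershader_source):
    """Prepend the #defines so the shader preprocessor strips untaken
    branches, leaving a branchless variant that runs "at speed"."""
    return "\n".join(defines) + "\n" + ubershader_source
```

In practice you would only generate the combinations that materials in the game actually use, rather than all 2^n of them, and feed each result to the platform’s shader compiler.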

The fact that we use an übershader isn’t important, though; we could equally well be gluing code pieces together, similar to what happens inside one of the graphical shader tools. What is important, however, is that our supported features are effectively exposed to the artists as checkboxes (and a set of associated controls, like sliders, color pickers, etc.), not as a do-whatever-you-want interface.

To be clear, our Maya-based system is very unsexy (in fact, our artists would probably happily attest to it being outright ugly), but its salient features are that people can get things done with it and, most importantly, that it allows us to “uncheck checkboxes” in the tools to implement shader LOD, or to scale back overuse of features that in the end turned out to be much too expensive to support. (BTW, for any SCEA SM artists reading this: we want to revamp that Maya interface eventually, honest!)
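
To illustrate why the checkbox model makes scaling back cheap — again a hypothetical sketch, not our actual tools — shader LOD can be a single pass over material feature sets that unchecks the most expendable features first:

```python
# Hypothetical drop order: features the tools are allowed to uncheck,
# most expendable first.
LOD_DROP_ORDER = ["PARALLAX", "ENV_MAP", "EMISSIVE"]

def apply_shader_lod(checked_features, lod_level):
    """Return a material's feature set with the first `lod_level`
    entries of the drop order unchecked."""
    dropped = set(LOD_DROP_ORDER[:lod_level])
    return [f for f in checked_features if f not in dropped]
```

Because every material is described by the same fixed feature vocabulary, this one function scales back the whole game; with per-material custom graphs there is no equivalent global knob.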

You rarely hear people say bad things about these graphical shader systems, but next time you’re at GDC and have the opportunity to meet up with a bunch of developers who’ve used a graphical shader system, ask them how much work they spent on cleaning up shaders at the end of the project. The stories I’ve heard haven’t been pretty.

Today’s moral: don’t make nonprogrammers code!



  1. james.bird said,

    August 3, 2008 @ 4:43 am

    I totally agree. I once worked at a games studio where we would automatically generate shaders from maya hypershade networks. The artists loved it because whatever they created in maya would look the same in game (or close enough), and it gave them 100% flexibility. However, we once had a case where an artist created the “mother of all crates”. Not only did the crate look the same in wireframe as it did normally, it also contained 20 unique maya materials which were subsequently converted into 20 unique shaders!

  2. nAo said,

    August 3, 2008 @ 5:16 am

    Good post Christer, I couldn’t agree more, in fact we used a very similar approach on Heavenly Sword.
    Except for a very small number of special shaders (skin, velvet, snow, etc.), the vast majority of shaders used in game were automatically built upon three basic materials (Lambert, Phong and metallic) with up to three diffuse/albedo texture layers (blending ops between those layers were selectable as well) and a bunch of extra features (such as UV animations, normal/parallax mapping, etc.).
    On a per-level basis, an offline tool determined which shaders were really needed, and optimized Cg code for the aforementioned combinations was automatically generated, compiled and stored in a shader dictionary.

    While we tried to satisfy most of the content creators’ requests, having such an approach allowed us to closely control the quality of shaders, making sure we were not supporting unnecessary features while having fairly optimized shader code at the same time.

    It’s probably not a dream system for artists & co., but at that time (in the end we didn’t know the hardware that well!) it sounded like the most sensible thing to do, and luckily enough our artists were absolutely great at using it and they pulled off some amazing stuff :)

  3. vince said,

    August 3, 2008 @ 1:04 pm

    I agree with the philosophy of keeping programming problems out of non-programmers’ hands, but I don’t think the tradeoffs are quite as clear-cut in this instance.

    I’ve shipped a 360/PS3 title using a graphical shader system similar to what you describe, and it really wasn’t a problem on that project. It does require an adjustment on the part of your artists and programmers, though.

    My advice for people using these types of systems is to give guidelines and set limits for the artists where possible, early on. Additionally, work early with your technical artists to create a base material library. You should only have a handful of artists creating materials, with the rest of your artists just overriding textures, colors, and other pluggable parameters. Think of it like an expandable ubershader system, where your tech artists are the ones making the base shaders.

    The advantage of the system is we were able to pull off quite a bit of custom material effects without any large amount of programmer time, and with a more constrained material system we just wouldn’t have had the programmer resources to do it.

    I wouldn’t say it is the One True Way to go, but I don’t think it should be conclusively ruled out either.

  4. LeGreg said,

    August 3, 2008 @ 3:13 pm

    Some graphical shader creation systems also tend to generate a lot more shaders. Some recent games on PC were creating 10,000+ pixel and vertex shaders before anything was drawn. I’d have to guess that’s because of their shader creation tools and the fact that they didn’t constrain themselves. That’s a noticeable stress on the system (runtime + driver, loading for the first time, switching resolution). Of course a change in the programming model (more subroutine calls) could help, but we have to make do with what we have until then.

  5. berkbig said,

    August 4, 2008 @ 2:35 am

    hmmm, bit disappointed in this as an article really – you could write it the other way around and the downsides would be as big or bigger. You fail to list the disadvantages of the other approach:

    1) Suddenly your artists need 4 textures, not the 3 you’ve allowed – it’s not really a big deal performance-wise, but you suddenly need a programmer involved for the art team to continue – in modern studios, where artists outnumber programmers to a serious degree, this seems pretty foolish
    2) Most of your checkboxes are probably incompatible in some way or interact to give unexpected results – how do you explain that to the art team?

    and I have issues with several of your other arguments too:

    1) Your performance argument is flawed because it’s just as bad in both cases – in either case you need a way for the art team to see which materials are too expensive and do something about them – whether you really think that twiddling checkboxes is better than swapping shader nodes may depend on your art team.
    2) I also fail to see your shader LOD argument – this stuff is just as easy to do in a graph environment as in a checkbox environment; it just depends how good your graph compiler is.

    Just to be clear, we don’t have a separate tool for this stuff; we just use the shader graph as implemented in your DCC tool of choice (Maya/XSI in our case), which is how the artists like to work, and which frees our programmers to do more interesting things instead. This works pretty well for us, and yes, it’s possible for the artists to make bad materials, but that is always true in any sufficiently general system.

    I think what I mostly wonder, though, is how you cope with the fact that your artists hate you – surely knowing that, and that you specifically chose it, is pretty sucky…

    but in the end I guess you’re right: writing a neat, future-proof, general, elegant solution to a problem (with the attendant performance issues) is never the right long-term solution to one of these problems; I mean, we all still punch our code out on bits of cardboard and hand them in to the secretarial staff for processing, right…



    P.S. apologies for the rant – seems like you hit a nerve ;)

  6. bionicbeagle said,

    August 4, 2008 @ 8:58 am

    You’re making a judgement based on the assumption that the one who authors the shaders doesn’t know what (s)he is doing.

    To avoid that problem you would assign the task of creating the shaders to someone who knows what they are doing, no?

    In the system you reference (Frostbite) there are mechanisms to allow Technical Artists or Rendering Programmers to author base shaders which are then used by the object artists. The object artists may augment the shaders with extra nodes, but the base functionality is controlled by technical staff.

  7. StephenTC said,

    August 4, 2008 @ 10:50 am


    While I think your argument on performance is debatable, I think you’re missing a key factor as to why graph-based shader systems are viable and desirable for current and future game development.

    Programmers are programmers and artists are artists. Giving artists anything new is always a dangerous idea, but inevitable. Programmers are expensive, and writing shaders can be time consuming. The shaders that programmers write limit artists’ capabilities for creating what they ultimately intend to display. Putting more freedom into our artists’ hands boosts the creative potential of any game.

    Performance loss is a notable reason for staying away from technologies like this, but as computing power grows, this argument shrinks and shrinks. If we were really concerned about performance, we would use assembly instead of C, and C instead of C++ for games. We don’t, because C/C++ is much faster to develop in, and much more intuitive to design with.

    Finally, and most critically, we can look at real world examples of games that use graph based shader systems and realize that they are among some of the most successful games of this generation. (Notably, every UE3 game uses it, and many are VERY successful).

    While I think that the alternative you mentioned is actually a much simpler process for managing shaders, I think that staying away from graph based shaders for fear of the artist is possibly archaic thinking.

  8. nAo said,

    August 4, 2008 @ 9:53 pm


    1) That’s certainly an issue (for example, I had to say no to an artist asking for a water shader, as I simply didn’t have the time to write/test/debug it; he ended up using a custom snow shader to render water, and it looked brilliant :) ), but it’s a good price to pay if you are after performance.
    Another, perhaps involuntary, side effect of this method is that it gives some sort of consistency/overall direction to your image and how it looks, which is imho very important.

    2) Not a big deal; tools have to be smart and only expose what makes sense with ‘current’ settings. If the artist is working on a material that performs lighting at the vertex shader level, there’s no need to show normal map settings to the content creator.

    1) Completely disagree on this point: you can easily show the artists how expensive a shader is (for example by associating colours with it), but this doesn’t mean that he/she knows where the issue is and how to fix it. My preferred method is to sit down with the artist or the art director and decide what feature set we really need, so that together we can come up with an implementation that satisfies him/her (it looks good, it’s easy to control, etc.) and me (it’s fast).


  9. berkbig said,

    August 5, 2008 @ 1:51 am

    Hey marco,
    I think you’ve taken issue with a point I didn’t make there (or perhaps made but didn’t intend to). Obviously artist education is a big deal in this space, and as you say, somebody sitting down with the artist in question or their art director is a good way of doing this. My point was just that whichever of the two opposing systems you prefer, this will be an issue, and I can see no reason why it should be easier to train artists to use a bunch of (in my experience) pretty arbitrarily named checkboxes properly than to teach them to use a shader graph. I base this on the fact that material setup in most DCC applications (or the ones we use) is already shader-graph based, and has been for 10 or 15 years. Surely leveraging that prior knowledge to help your education process must be a good idea…?



  10. JohnOKane said,

    August 5, 2008 @ 4:56 am

    I generally agree with the post. Exposing languages graphically doesn’t make the problem domain any less complicated; it just makes it easier for non-experts to affect the solution. If you have someone who knows their way around graphical languages like Maya’s Hypershade and thinks about performance, then that is a solution – but it’s a rather specialist role. I’d consider layering another interface on top, with checkboxes and limitations (or bypassing the Hypershade approach in the first place and having a specialist programmer who liaises with art full-time).

    Phone companies went down this route 15 years ago; they thought making everything block-diagram-like and graphical would be the best thing ever – less-skilled technicians would be able to set up phone switches in a jiffy, and maybe even customers could fix problems. But the problem domain was still as complicated as it ever was, regardless of whether it was represented in text or icons. It turned out to be a bit of a bubble in terms of promised gains.

  11. Jon said,

    August 5, 2008 @ 8:29 pm

    Well, I gotta say I agree with quite a number of people here. There’s really nothing inherently wrong with using a graph/node-based solution. You just have to either
    a) greatly limit those who are able/allowed to author new shaders to people who are trustworthy, responsible and not likely to try to sneak stuff by. The growing library of base shaders is then used by the other artists to author their content.
    b) Give the artists good metrics to understand the performance (in a meaningful/practical way) and consequences of the content they’re authoring. Frankly, there’s no excuse not to attempt to do this regardless of what you’re trying to do.
    I’m currently working on a 60 Hz title where we’re utilizing a graph-based shader editor, and we really don’t have GPU performance problems. The artists’ pipeline, while hardly perfect, really seems to work pretty well. While a lot of the artists did initially object to being restricted from just following their hearts’ desire, they rather quickly got used to it, and now that they fully realize the options and effort required to author various effects efficiently, they seem to generally be almost grateful they don’t have to author the stuff themselves.
    A lot of it really comes down to discipline.
    And, as others have pointed out, the ubershader model doesn’t really guarantee anything either unless you balance performance against the worst-case construction of the ubershader (which would likely heavily penalize your game). Unfortunately, all solutions generally require the assumption that at least some of your people know what the heck they’re doing.

  12. Visual Scripting Languages | .mischief.mayhem.soap. said,

    August 6, 2008 @ 1:13 pm

    […] an interesting entry on Christer’s blog about graphical shader systems. It reminded me of another issue that […]

  13. kenpex said,

    August 7, 2008 @ 2:28 pm

    I think you’re right, and I will push it further…

    First of all, shaders are performance _critical_, they are basically a tiny kernel in a huuuge loop, tiny optimizations make a big difference, especially if you manage to save registers!

    The ubershader approach is nice; in my former company we pushed it further. We had a parser that generated a 3ds Max material plugin (script) for each (annotated) .fx file. Some components in the UI were true parameters, others changed #defines; when the latter changed, the shader had to be rebuilt. Everything was done directly in 3ds Max, and it worked really well.

    But this is not enough! First of all, optimizing every path is still too hard. Moreover, you don’t have control over the number of possible shaders in a scene. Worse yet, you lose some information. Let’s say the artists are authoring everything well, caring about performance measures, etc. In fact, our internal artists were extremely good at this. But what if you wanted to change all the grass materials in your game to use another technique? You could not, because the materials are generic selections of switches, with no semantics!

    That’s why we intended to use that system only as a prototype, to let artists find the stuff they needed easily and then coalesce everything in a fixed set of shaders!

    In my new company we are using a fixed set of shaders, which are generated easily by programmers, usually by including a few implementation files and setting some #defines.

    I want to remark that the coders-do-the-shaders approach is not good only because performance matters. IT IS GOOD EVEN FROM AN ART STANDPOINT. Artists and coders should COLLABORATE. They both have different views and different ideas; only together can they find really great solutions to rendering problems.

    The idea of having the coders do the code, wrap it in nice tools, and give the tools to the artists is not only bad performance-wise, it’s bad engineering-wise (most of the time you spend more resources making and maintaining those uber tools than you would spend by having a dedicated software engineer working closely with artists on shaders), and it’s bad art-wise (as connecting boxes has very limited expressive power).

  14. kenpex said,

    August 7, 2008 @ 2:38 pm

    Post Scriptum:
    To deal with incompatible switches, in my system I had annotations that could disable switches based on the status of other ones. It was really really easy. I did support #defines of “bool”, “enum” and “float” type. The whole annotated .fx parser -> 3dsmax material GUI was something like 500 lines of maxscript code.

  15. // Write something witty here » Blog Archive » Christer Ericson on Shader Systems said,

    August 19, 2008 @ 2:16 am

    […] Christer Ericson, author of the tremendous book Real-time Collision Detection, talks down on Shader Graphs in a recent blog entry. […]

  16. Real-Time Rendering » Blog Archive » Interesting bits said,

    October 2, 2008 @ 8:43 pm

    […] avoid logrolling in this blog, but did want to mention enjoying Christer Ericson’s post on graphical shader systems. I have to agree that such systems are bad for creating efficient shaders, but these tools do at […]

  17. RTT said,

    October 31, 2008 @ 2:55 pm

    We also shipped two games (both multiplatform PC/PS3/360) using a graphical shader editor, and we had no problems with it producing good results and performance. Generally shaders were designed by technical artists and programmers; otherwise they would need approval through a TA. I wouldn’t write off these graph editors without looking at some successful systems; they can be designed in more than one way.

  18. Lamont said,

    December 26, 2008 @ 9:31 pm

    If you get to the end of a project and see that there are 30 different versions of a “prelit diffuse with normal” shader, then your problems are with the Art Director and Lead Programmer for letting it slide till the end. The artists are just making art; the leads need to be on that, making sure artists use predetermined shader instances, and if they need a new shader, it gets brought up with the Art Director AND Lead Programmer to evaluate whether it needs to be added.

    There is nothing wrong with these systems if people mind what they are doing.

  19. christer said,

    December 28, 2008 @ 11:00 pm

    Lamont (and others who also didn’t get it)… the key point of the post is that you need to control shaders very closely, as they are the most performance-critical code we have today. You do this by not letting users author shaders on their own. I suggested one alternative solution (a “checkboxed” übershader) but exposing a small number of predetermined shaders is, of course, also a solution. (In fact, that is the traditional, conservative, option.)

    If you decide to author these predetermined shaders using a graphical interface, used by a programmer or programming-savvy tech artist, that is an entirely different issue and not one I object to. And anyone proficient in the English language should have been able to read that out of what I wrote above.

  20. Talk about graphical shader system « Alch3n’s Blog said,

    April 4, 2009 @ 5:31 am

    […] like Christer Ericson and Wolfgang Engle think shaders are most performance critical for a game, and it’s wrong to […]

  21. - the blog » Catching up (part 2) said,

    June 8, 2009 @ 1:52 am

    […] joins the crowd who think graph based shader editors get a little too much […]
