A brief graphics blog summary

With the signal-to-noise ratio of the internet at an all-time low, here are some fairly recent graphics-related blog posts from across the world that I think contributed positively to the SNR. Just in case you missed them the first time around.

Jim Tilander is one of our programmers at Sony Santa Monica, and in Comparing images he talks about a component of our test machine that takes automated screenshots of test files and compares images from run to run, in order to catch us accidentally breaking the renderer (shaders, blending, etc.), the animation system, or similar. Jim only alludes to it at the end of his article, but almost from the very first day the system was up it helped catch a subtle bug where gamma correction had been broken in the tools. Visually the images looked almost the same, so had we not had the test machine up and running we would probably not have noticed gamma being broken for several weeks (if indeed ever). Another good post on testing graphics code can be found at Aras Pranckevičius’ blog.
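
To make the idea concrete, here is a minimal sketch of that kind of screenshot comparison, assuming images arrive as equally sized rows of (r, g, b) tuples. The function names and the tolerance value are illustrative choices, not taken from the actual test machine described above:

```python
def max_channel_diff(img_a, img_b):
    """Return the largest per-channel difference between two images,
    given as equally sized rows of (r, g, b) tuples."""
    worst = 0
    for row_a, row_b in zip(img_a, img_b):
        for (r1, g1, b1), (r2, g2, b2) in zip(row_a, row_b):
            worst = max(worst, abs(r1 - r2), abs(g1 - g2), abs(b1 - b2))
    return worst

def images_match(reference, candidate, tolerance=2):
    """Flag a regression when any pixel drifts more than `tolerance`.
    A broken gamma curve shows up as small but systematic drift, which
    a tight per-channel tolerance catches even when eyes do not."""
    return max_channel_diff(reference, candidate) <= tolerance
```

The tight tolerance is the point: a subtle tools-side gamma change moves nearly every pixel by a few values, which trips the check immediately even though the images look nearly identical side by side.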

The latest chapter in the never-ending saga of shadow mapping is Layered Variance Shadow Maps (as described by Andrew Lauritzen on the Beyond3D forums).

Angelo Pesce has a nice (though much too dark, Angelo) graphics blog where he’s posted some astute observations, including paying attention to state in LOD’ed shaders in How to properly LOD pixel shaders, and a warning against wasting pixels when drawing alpha-tested geometry in Hidden overdraw.

Arseny Kapoulkine talks about saving bandwidth and pushbuffer memory usage in your particle system using the vertex stream frequency divider feature in his Particle rendering revisited post.

Tom Forsyth reminds us that Premultiplied alpha makes blending associative, so we can composite translucent objects (such as particles) front-to-back instead of back-to-front. Timothy Farrar, in the blog for his awesome-looking Atom project, describes his experiments with front-to-back drawing in Drawing in reverse and Drawing in reverse II. Tim also talks about how front-to-back drawing could allow for overdraw reduction in Transparent Rendering Pass :: Overdraw Reduction. (I’m curious to hear if anyone’s tried something like this already.)
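
As a quick sanity check of the associativity claim, here is a sketch of the Porter–Duff “over” operator on premultiplied (r, g, b, a) colors; this helper is an illustration, not Tom’s code. Compositing three layers front-to-back or back-to-front gives the same result (up to floating-point error):

```python
def over(src, dst):
    """Porter-Duff 'over' for premultiplied (r, g, b, a) colors:
    out = src + (1 - src_alpha) * dst, applied to all four channels."""
    k = 1.0 - src[3]
    return tuple(s + k * d for s, d in zip(src, dst))

# With premultiplied colors, 'over' is associative:
#   over(a, over(b, c)) == over(over(a, b), c)
# so we may accumulate layers starting from the frontmost one.
```

Expanding both orders by hand shows why: either way the back layer’s contribution ends up scaled by (1 − a_front)(1 − a_middle), which is exactly what non-premultiplied blending fails to preserve when you reorder.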

Both Nick Porcino and Jeremy Shopf nicely summarized several approaches to screen-space ambient occlusion. I like the utter simplicity of Mike Pan’s approach as implemented in Blender. It’s worth pointing out that SSAO will never give us the occlusion of large features, such as creases in mountain ranges, so you still need some other mechanism to account for such occlusion.
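
One way to read the simple Blender-style approach is as an unsharp mask on the depth buffer: darken pixels that sit behind their local neighborhood. Here is a 1-D toy of that idea, with a box blur standing in for the real 2-D GPU filter; the radius and strength parameters are illustrative, not Mike Pan’s actual settings:

```python
def box_blur(depth, radius):
    """Simple 1-D box blur of a list of linear depth values."""
    n = len(depth)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(depth[lo:hi]) / (hi - lo))
    return out

def unsharp_occlusion(depth, radius=2, strength=1.0):
    """Occlusion estimate: positive where a pixel is farther away than
    its blurred surroundings (i.e. it sits in a crease or corner)."""
    blurred = box_blur(depth, radius)
    return [max(0.0, (d - b) * strength) for d, b in zip(depth, blurred)]
```

The same reading also makes the limitation above obvious: with a small blur radius, a mountain-range-sized crease looks locally flat, so its occlusion never registers.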

Like SSAO, deferred rendering has been all the rage recently, and it’s being used to great effect in Killzone 2, which is looking absolutely gorgeous. Deferred rendering is an approach where a first pass typically renders out one or more buffers containing geometry information (e.g. positions, normals, diffuse color, lighting information, etc.). A second pass reads these g-buffers, performs lighting calculations, and writes to the frame buffer. Different variations on this exist. As summarized here, Damian Trebilco suggests rendering only light indices in a first pass and then performing standard forward rendering in a second pass, reading the light index texture built in the first pass. Wolfgang Engel suggests a method somewhere halfway in-between “standard” deferred rendering and Trebilco’s method.
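
For readers new to the technique, the two-pass structure of “standard” deferred rendering can be sketched in a few lines. The g-buffer layout and the simple N·L diffuse lighting here are illustrative stand-ins, not the setup of any particular engine mentioned above:

```python
def dot3(a, b):
    """Dot product of two 3-vectors given as tuples."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def geometry_pass(fragments):
    """Pass 1: instead of shading immediately, store per-pixel
    attributes (position, normal, albedo) into a g-buffer."""
    return [{"position": p, "normal": n, "albedo": c} for p, n, c in fragments]

def lighting_pass(gbuffer, lights):
    """Pass 2: shade each g-buffer sample exactly once, looping over
    lights given as (direction, power) pairs."""
    frame = []
    for px in gbuffer:
        intensity = 0.0
        for light_dir, light_power in lights:
            intensity += max(0.0, dot3(px["normal"], light_dir)) * light_power
        frame.append(tuple(c * intensity for c in px["albedo"]))
    return frame
```

The appeal is visible even in this toy: lighting cost scales with visible pixels times lights, independent of scene overdraw, which is what makes many-light scenes tractable.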

I’m sure there was more signal in the noise. What did I miss?

10 Comments

  1. DanG said,

    April 14, 2008 @ 2:41 am

    Great post, Christer.

    In regards to Tom’s premultiplied alpha: I remember that around the time he posted it I was reading the same thing in Jim Blinn’s Dirty Pixels book. Jim presents the same blend modes as Tom does and outlines how they’re needed for correct filtering of texels that contain alpha. Finally a method to get rid of the horrid black borders on so-called “cookie cutter” textures.

    cheers
    DanG

  2. christer said,

    April 14, 2008 @ 8:41 am

    Thanks Dan. Right, as Tom points out, the concept of premultiplied alpha isn’t new. In addition to Tom’s article I would also recommend reading the original article by Porter and Duff (of Duff’s device), as well as chapter 16 of Blinn’s “Dirty Pixels.” All three are excellently written and highly readable. Blinn is one of my favorite educators of all time, and he’s high on my recommended reading list (I should probably update the list one of these days, huh).

  3. Aras Pranckevicius said,

    April 14, 2008 @ 8:55 am

    “I’m sure there was more signal in the noise. What did I miss?”
    Hey, you missed all the noise!

  4. TimothyFarrar said,

    April 15, 2008 @ 2:54 pm

    Awesome post. I probably should add that after more testing of “front-first” particle drawing under larger amounts of overdraw, I did indeed find that using stencil to limit overdraw provided some quite good gains, and, just as important, provides a good bound on worst-case performance. One thing I haven’t tried yet is adapting to frame rate by controlling the stencil limit dynamically. In my case I use a maximum of 16 times overdraw, but could scale back to a maximum of 8 times overdraw if frame rate drops. Another trick I use (since my pipeline uses only transparent billboards) is to always draw a blurred version of the previously drawn frame at 100% as the “skybox”, which fills in areas that didn’t get enough overdraw to fully fill.

  5. pixelame.net said,

    April 18, 2008 @ 11:54 am

    realtimecollisiondetection.net: the blog…

    Real-time collision detection has always been a topic of discussion, ever since the archaic era of 2D games. This blog is devoted entirely to that subject….

  6. kenpex said,

    April 20, 2008 @ 11:55 pm

    Ok ok got it, I’ve updated my site to be more reading-friendly. Hope that helps.

  7. christer said,

    April 21, 2008 @ 12:24 am

    Ah, the power of the written word! :) Thanks for stopping by, Angelo. BTW, here are some links that should be of interest to those who read your posts on how GPUs work: The Froggy FragSniffer (Understanding G80 behavior and performances) and decuda and cudasm.

  8. Andrew Lauritzen said,

    April 24, 2008 @ 8:02 am

    Great post! Thanks especially for the links to the SSAO stuff. There has been a ton of work on that front lately and it seems that I missed a good chunk of it. In particular, the unsharp mask stuff is kind of a clever way to look at the problem, although naturally the simplicity comes with some artifacts.

  9. kenpex said,

    April 25, 2008 @ 11:51 am

    Thanks Christer, actually I intended to add some links to CUDA stuff, as they provide interesting details on the G80, but then I forgot to. The problem is that I write those posts in a really unprofessional way, jotting down notes and emails from work to home, and most of the time the end result is messier than I wanted.
    By the way, congratulations on your blog; I read it via RSS on my WinCE phone while commuting to work (a shame the RSS feed does not contain full entries, though), and I greatly enjoyed your book. I’m not into collision detection, but I recommend it to anyone interested in a good introduction to linear algebra, spatial subdivision, or code optimization :D

  10. realtimecollisiondetection.net - the blog » Posts and links you should have read said,

    September 2, 2008 @ 8:33 am

    […] here. A nice, large screenshot of the SC2 SSAO solution can be seen here. I posted some SSAO links before; Nick Porcino’s SSAO summary is particularly nice, with a lot of extra references in the post […]
