With the signal-to-noise ratio of the internet at an all-time low, here are some fairly recent graphics-related blog posts from across the world that I think contributed positively to the SNR. Just in case you missed them the first time around.
Jim Tilander is one of our programmers at Sony Santa Monica, and in Comparing images he describes a component of our test machine that takes automated screenshots of test files and compares the images from run to run, catching cases where we accidentally break the renderer (shaders, blending, etc.), the animation system, or similar.
Jim only alludes to it at the end of his article, but almost on the very first day the system was up it helped catch a subtle bug: gamma correction had just been broken in the tools. Visually the images looked almost the same, so had the test machine not been up and running we probably would not have noticed that gamma was broken for several weeks (if indeed ever).
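The core of such a regression check can be sketched in a few lines. This is a hypothetical illustration, not the actual test-machine code: images are flat lists of (r, g, b) tuples, and the function name and thresholds are invented.

```python
# Hypothetical sketch of a screenshot-regression check: compare two
# frames pixel by pixel and flag the run if too many pixels drift by
# more than a small per-channel tolerance.

def frames_match(reference, candidate, channel_tol=2, max_bad_fraction=0.001):
    """Return True if `candidate` matches `reference` within tolerance."""
    assert len(reference) == len(candidate), "frame sizes differ"
    bad = 0
    for ref_px, cand_px in zip(reference, candidate):
        if any(abs(r - c) > channel_tol for r, c in zip(ref_px, cand_px)):
            bad += 1
    return bad / len(reference) <= max_bad_fraction

# A small but uniform shift (e.g. broken gamma) trips the check even
# though the two images look nearly identical to the eye.
flat = [(100, 100, 100)] * 1000
gamma_broken = [(104, 104, 104)] * 1000
print(frames_match(flat, flat))          # True
print(frames_match(flat, gamma_broken))  # False
```

The point is exactly the gamma story above: a shift too subtle for a human to spot in passing still differs on every pixel, which is trivial for a machine to detect.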
Another good post on testing graphics code can be found at Aras Pranckevičius’ blog.
Angelo Pesce has a nice (though much too dark, Angelo) graphics blog where he’s posted some astute observations: pay attention to render state in LOD’ed shaders in How to properly LOD pixel shaders, and beware of wasting pixels when drawing alpha-tested geometry in Hidden overdraw.
Tom Forsyth reminds us that Premultiplied alpha makes blending associative, so we can composite translucent objects (such as particles) front-to-back instead of back-to-front. Timothy Farrar, in the blog for his awesome-looking Atom project, describes his experiments with front-to-back drawing in Drawing in reverse and Drawing in reverse II. Tim also talks about how front-to-back drawing could allow for overdraw reduction in Transparent Rendering Pass :: Overdraw Reduction. (I’m curious to hear if anyone’s tried something like this already.)
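The associativity is easy to check numerically. A minimal sketch (not from Tom’s post; the layer colors are made up) using the premultiplied "over" operator, src + (1 − src.a) · dst, on (r, g, b, a) tuples with premultiplied rgb:

```python
# With premultiplied alpha, "over" is src + (1 - src.a) * dst, and it
# is associative, so a stack of layers can be folded front-to-back
# just as well as back-to-front.

def over(src, dst):
    """Composite premultiplied src over premultiplied dst."""
    k = 1.0 - src[3]
    return tuple(s + k * d for s, d in zip(src, dst))

# Three translucent layers, listed front to back:
A = (0.5, 0.0, 0.0, 0.5)   # red at 50% (rgb premultiplied by alpha)
B = (0.0, 0.3, 0.0, 0.3)   # green at 30%
C = (0.0, 0.0, 0.8, 0.8)   # blue at 80%

back_to_front = over(A, over(B, C))   # the usual compositing order
front_to_back = over(over(A, B), C)   # associativity permits this too

print(back_to_front)
print(front_to_back)   # same result, up to float rounding
```

With non-premultiplied (interpolative) blending, the front-to-back grouping would give a different answer, which is precisely why premultiplication is what makes the reordering legal.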
Both Nick Porcino and Jeremy Shopf nicely summarized several approaches to screen space ambient occlusion. I like the utter simplicity of Mike Pan’s approach as implemented in Blender.
It’s worth pointing out that what we’ll never get with SSAO is the occlusion of large features, such as creases in mountain ranges, so you still need some other mechanism to account for such occlusion.
Like SSAO, deferred rendering has been all the rage recently, and it’s being used to great effect in Killzone 2, which is looking absolutely gorgeous. In deferred rendering, a first pass typically renders out one or more buffers containing geometry information (e.g. positions, normals, diffuse color, lighting information). A second pass then reads these g-buffers, performs the lighting calculations, and writes the result to the frame buffer.
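The two-pass structure can be sketched in miniature. This is a schematic illustration with invented names, using flat lists in place of GPU buffers, and storing only normal and albedo in the g-buffer:

```python
# Minimal deferred-rendering sketch: pass 1 fills a g-buffer with
# per-pixel geometry attributes (no lighting yet); pass 2 reads only
# the g-buffer and computes simple Lambertian lighting.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def geometry_pass(scene_pixels):
    """Pass 1: write per-pixel geometry attributes."""
    return [{"normal": n, "albedo": alb} for n, alb in scene_pixels]

def lighting_pass(gbuffer, light_dir):
    """Pass 2: shade each pixel purely from the g-buffer."""
    frame = []
    for px in gbuffer:
        n_dot_l = max(0.0, dot(px["normal"], light_dir))
        frame.append(tuple(c * n_dot_l for c in px["albedo"]))
    return frame

scene = [((0.0, 1.0, 0.0), (1.0, 0.5, 0.25)),   # up-facing pixel
         ((0.0, -1.0, 0.0), (1.0, 0.5, 0.25))]  # down-facing pixel
gbuf = geometry_pass(scene)
frame = lighting_pass(gbuf, light_dir=(0.0, 1.0, 0.0))
print(frame)   # [(1.0, 0.5, 0.25), (0.0, 0.0, 0.0)]
```

The key property is that the lighting pass never touches scene geometry, only the g-buffer, which is what makes the lighting cost independent of scene complexity.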
Different variations on this exist. As summarized here, Damian Trebilco suggests rendering only light indices in a first pass and then performing standard forward rendering in a second pass, reading the light index texture built in the first pass.
Wolfgang Engel suggests a method somewhere halfway between “standard” deferred rendering and Trebilco’s method.
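A rough sketch of the light-indexing idea, with hypothetical names and a trivial radius test standing in for real light assignment: the first pass records which lights touch each pixel, and the forward pass then loops only over that pixel’s indices instead of every light in the scene.

```python
# Light-indexed sketch: pass 1 builds a per-pixel list of light
# indices; pass 2 is ordinary forward shading restricted to them.

def build_light_index_buffer(pixel_positions, lights, radius=2.0):
    """Pass 1: per-pixel indices of lights within `radius`."""
    buffer = []
    for p in pixel_positions:
        indices = [i for i, l in enumerate(lights)
                   if sum((a - b) ** 2 for a, b in zip(p, l["pos"])) <= radius ** 2]
        buffer.append(indices)
    return buffer

def forward_pass(pixel_positions, index_buffer, lights):
    """Pass 2: shade each pixel using only its indexed lights."""
    return [sum(lights[i]["intensity"] for i in idx)
            for p, idx in zip(pixel_positions, index_buffer)]

lights = [{"pos": (0.0, 0.0), "intensity": 1.0},
          {"pos": (10.0, 0.0), "intensity": 0.5}]
pixels = [(0.5, 0.0), (10.0, 0.5)]
idx_buf = build_light_index_buffer(pixels, lights)
print(idx_buf)                                # [[0], [1]]
print(forward_pass(pixels, idx_buf, lights))  # [1.0, 0.5]
```

Unlike a full g-buffer, the first pass here stores only indices, so the second pass keeps all the flexibility of forward rendering (arbitrary materials, MSAA) while still avoiding the every-light-times-every-pixel cost.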
I’m sure there was more signal in the noise. What did I miss?