Lately in particular (but really for quite some time) there’s been a lot of hype about real-time ray tracing being the graphical equivalent of the second coming of Christ. This has bothered me a lot and I felt I should write about it at some point, but it turns out I don’t have to, because Dean Calver just did a much better write-up on the topic than I could ever have done. Dean’s write-up is fab and it’s called Real-Time Ray Tracing: Holy Grail or Fools’ Errand? In fact, it’s so good you should read it repeatedly, until you become one with the article.
And after you’ve read it, ask yourself why Intel was afraid to answer these questions about their current real-time ray tracing endeavors.
That is truly an excellent article. I have been hearing the RTR hype for a while too and it’s nice to see a good counterpoint.
I notice some parallels between your last couple of posts, actually. A good way to describe them is with the cliché “When you have a hammer, everything looks like a nail”. Intel, for instance, wants to promote RTR to swing interest back away from GPUs towards CPUs, I guess. Other people want to promote ray tracers because they are fun to write and easily make pictures of shiny spheres (disclaimer: I haven’t written one). OK, that is kind of a cheap shot and I don’t really want to take anything away from hobbyist ray tracers, and I know a million people can show me really awesome ray-traced pictures. But as fun and cool as ray tracers are, it doesn’t mean that they “cure cancer” as you say. Or help you ship playable, good-looking games.
Likewise, Togelius. Here is a guy who probably enjoys and knows a lot about things like neural networks and evolving AI controllers. But, ah, what relevant things can you do with that stuff? You can perhaps generally advance the state of research in those particular subjects. Probably there are some other areas that academics can cite, like “oh, so-and-so made a neural network that predicts stock option prices that such-and-such a bank uses”. But Togelius’s work, looking over his list of papers, is so obviously geared towards racing games that his targets for relevance are either the aforementioned “general advancement” or actual usage in a real game. So, if he’s going to generally advance academic “AI”, awesome, and more power to him. But I guess he also wants to make a case that actual games can use his stuff, and here we see the “hammer looking for a nail”.
What either Intel (to take one example of an entity hyping RTR) or Togelius could do to gain respect is prove it. Let’s see a game using it… otherwise it’s all just crack pipe smoke, isn’t it?
Perhaps in a way it’s even a waste of time to argue over whether a given technique is useful or not; the market will judge.
The Beyond3D link is down for now, so I haven’t been able to RTFA yet, but I really wouldn’t be so fast to dismiss the hype especially since a lot of good people I know are being snapped up by Intel to play with their new multi-multi-core architecture.
Recursive ray tracing is the problem – it’s the “big sales point” for the one-model-fits-all solution. I think of RTRT as more a way of putting point sampling back into the rendering pipeline on a large scale. (See my patent #7212207 for more on using ray tracing for visibility and scan conversion for display, both in the same pipeline.)
Once you get down to the one-poly-per-pixel world, all the advantages of using polygons to interpolate values are lost, so you’re back to point sampling geometry, and all the advances in texture caching will do you no good, as all those texture derivatives you so carefully set up are useless. I truly believe we need to use the strengths of both ray tracing and scan conversion in the same pipeline to finally give us the elusive holy grail of real-time global illumination.
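To make the derivative point concrete, here’s a rough sketch of the usual mip-selection math driven by screen-space UV derivatives (names and numbers are purely illustrative, not taken from any particular piece of hardware):

```cpp
#include <algorithm>
#include <cmath>

// Illustrative only: classic mip-level selection from screen-space
// UV derivatives, the quantities a rasterizer gets essentially for
// free by differencing neighbouring pixels of the same triangle.
float mipLevel(float dudx, float dvdx, float dudy, float dvdy,
               float texWidth, float texHeight)
{
    // Texel-space footprint of a one-pixel step in x and in y.
    float lenX = std::sqrt(dudx * texWidth * dudx * texWidth +
                           dvdx * texHeight * dvdx * texHeight);
    float lenY = std::sqrt(dudy * texWidth * dudy * texWidth +
                           dvdy * texHeight * dvdy * texHeight);
    float rho  = std::max(lenX, lenY);      // widest axis of the footprint
    return std::max(0.0f, std::log2(rho));  // LOD 0 = base mip level
}
```

Once every triangle covers at most one pixel there are no neighbouring samples of the same triangle to difference, so those inputs have to be reconstructed some other way (ray differentials and the like) – which is part of why I see this as point sampling coming back in force.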
– Robin Green
Do we really need to get down to one poly per pixel? I’d be far happier if we could just redistribute polys in a nice way as we are already rendering more than one triangle per pixel (on average) on next gen console games.
Derivatives won’t be useless at all (as they still make sense!), but if you could guarantee that your average primitive area is around one pixel, GPUs would be designed in a different way (as they are now, 75% of the ALUs would sit idle).
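To unpack that 75%: today’s GPUs shade 2×2 pixel quads so derivatives can be formed by differencing lanes, and a triangle that touches a single pixel still occupies a whole quad, leaving three of the four lanes as throwaway helper work. A toy sketch of the arithmetic (the triangle count is made up):

```cpp
#include <cstdio>

// Toy estimate of shading-lane waste under 2x2 quad shading when
// every triangle covers roughly one pixel (all numbers illustrative).
int main()
{
    const long triangles     = 1000000;  // pretend: one triangle per pixel
    const int  lanesPerQuad  = 4;        // 2x2 quad
    const int  livePixels    = 1;        // a 1-pixel triangle fills one lane
    const long lanesLaunched = triangles * lanesPerQuad;
    const long lanesWasted   = triangles * (lanesPerQuad - livePixels);
    std::printf("wasted lanes: %ld of %ld (%.0f%%)\n",
                lanesWasted, lanesLaunched,
                100.0 * lanesWasted / lanesLaunched);  // -> 75%
    return 0;
}
```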
Moreover, interpolation of values across a primitive is not the only advantage of rasterization; Pixar has been using their scanline renderer on subpixel-size primitives for over 20 years now, while they use ray tracing only when it can save them time. There must be a reason for that (if we exclude inertia :) ), which is “rasterization is plain better than ray tracing at exploiting coherent rays”.
Now I need to have a look at your patent…
Marco
This talk of “one poly per pixel” and “more than one triangle per pixel (on average)” seems odd to me. Let’s say we are using 1280×720 resolution; then we have about a million pixels. Further, let’s use a typical GPU running at about 500MHz, which can set up one triangle every clock cycle. So it will take about 2 seconds to render these million triangles, even assuming the other parts of the pipeline can keep up with the triangle setup. Hardly realtime or even interactive. Am I missing something?
Haha, never mind, got my zeroes out of place. I should have known that didn’t sound right.
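For the record, the corrected back-of-the-envelope numbers (same assumptions as above: ~1M pixels at 1280×720, one triangle set up per clock at 500MHz):

```cpp
#include <cstdio>

// Corrected back-of-the-envelope: triangle setup time for one triangle
// per pixel at 1280x720 on a hypothetical 500 MHz, 1-triangle-per-clock GPU.
int main()
{
    const double triangles      = 1280.0 * 720.0;  // ~0.92 million
    const double setupPerSecond = 500e6;           // 500 MHz, 1 tri/clock
    const double milliseconds   = triangles / setupPerSecond * 1000.0;
    std::printf("triangle setup: %.2f ms per frame\n", milliseconds); // ~1.84 ms
    return 0;
}
```

So setup alone is on the order of two milliseconds per frame, not two seconds – a factor of a thousand off, and comfortably within a real-time budget.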