Anyone with a keen eye who has used any of today’s real-time rendered graphics applications (read: played any games) has probably noticed the main topic of this entry – nothing lights like it is supposed to! We have all these bright, shiny bricks on our virtual buildings that would be more accurately modeled with a strictly diffuse Lambert term (which we’ve been able to do in real time for close to 20 years) than with the Phong reflectance model that is used in most applications today. The Phong model could easily be tuned to capture this effect (just limit the specular and glossy components), but it appears that bricks laminated and coated with slime are more visually appealing to graphics designers than something even reasonably plausible.
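To make the gripe concrete, here’s a minimal sketch of the two models in question (Python, with made-up material values purely for illustration) – note that “fixing” Phong for a matte brick is as simple as zeroing its specular weight:

```python
import math

def lambert(n_dot_l, albedo):
    """Pure diffuse Lambert term: constant reflectance, no view dependence."""
    return albedo * max(n_dot_l, 0.0)

def phong(n_dot_l, r_dot_v, albedo, k_spec, shininess):
    """Classic Phong: Lambert diffuse plus a glossy lobe around the mirror direction."""
    diffuse = albedo * max(n_dot_l, 0.0)
    specular = k_spec * max(r_dot_v, 0.0) ** shininess
    return diffuse + specular

# A matte brick is better served by killing the specular term entirely:
matte = phong(0.7, 0.9, albedo=0.5, k_spec=0.0, shininess=1.0)
shiny = phong(0.7, 0.9, albedo=0.5, k_spec=0.8, shininess=32.0)
```

With `k_spec = 0.0` the Phong evaluation collapses to the Lambert term exactly, which is the point: the slime-coated look is a tuning choice, not a limitation of the model.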
This is one of my biggest gripes with graphics today and has been for over 5 years – not just the over-dramatized (and disgustingly inaccurate) lighting effects, but the entire direction the lighting systems in modern real-time renderers have been heading in general. Most renderers take a single lighting model and apply it to every surface in the world with some sort of per-surface tuning in the form of specular maps, gloss maps, etc. With a lot of fine-tuning by excellent artists, this can actually produce rather believable results. Some of the “better” rendering systems support multiple lighting models, so you can draw some surfaces with Phong, some with Ward, some with Lafortune, some with He-Torrance, and so on. You’re still fighting a losing battle, however: you’re trying to model what is, at a minimum, a 4-dimensional function (the BRDF) with a series of 2D snapshots, eyeballing what “looks right”. The world would be a whole lot happier if developers would integrate actual BRDF data, measured from the real world, into their pipeline – at the very least to fit these analytical models to. There are literally tons of free BRDF data sets available to anyone with an internet connection, yet I have never seen any of them used in a commercially available real-time rendering package.
Both of these solutions are analytical models used to approximate the true reflectance function of real surfaces – they’re guaranteed right off the bat to never be completely correct (and most of them aren’t even physically correct in theory). This is analogous to having a set of scattered points that you’d like to fit a curve to. The problem is that we have to use one fixed function for all surfaces and fit drastically different (sometimes almost completely incoherent) data to it – a wide range of surfaces is bound to fall by the wayside in this approximation.
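The curve-fitting analogy can be made literal. The sketch below (Python; the “measured” samples and the brute-force grid search are purely illustrative stand-ins) fits a Phong-style lobe to a set of reflectance samples by least squares – exactly the kind of fit I’d like to see done against real measured BRDF data instead of by eye:

```python
# Hypothetical "measured" samples: pairs of (cosine of angle to the mirror
# direction, reflectance). Generated here from a known lobe so the fit is exact.
def phong_lobe(cos_r, k, n):
    """A single glossy lobe: k * cos(angle-to-mirror)^n."""
    return k * max(cos_r, 0.0) ** n

samples = [(c / 10.0, phong_lobe(c / 10.0, 0.6, 20.0)) for c in range(1, 11)]

def fit_phong(samples, k_candidates, n_candidates):
    """Brute-force least-squares fit of (k, n) to the sample set."""
    best, best_err = None, float("inf")
    for k in k_candidates:
        for n in n_candidates:
            err = sum((phong_lobe(c, k, n) - v) ** 2 for c, v in samples)
            if err < best_err:
                best, best_err = (k, n), err
    return best

k_fit, n_fit = fit_phong(samples,
                         [i / 10.0 for i in range(1, 11)],
                         [float(n) for n in range(1, 41)])
```

A real pipeline would use a proper nonlinear optimizer and real data, but the shape of the problem is the same: the fit is an offline step, so it costs the renderer nothing at runtime.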
A theoretically much sounder approach is to note that every surface in the real world has its own distinct BRDF. Using one BRDF for all surfaces is just not going to cut it; add a few more models to the mix and you’re definitely getting closer to what you want, but you’re still falling short. It’d be best to have a data-driven approach where the lighting model isn’t a mathematical function at all, but simply some sort of “texture” (I use the term loosely here) that we reference with the incident light vector, the outgoing eye vector, and the wavelength of light we’re trying to sample. We then associate a BRDF with each texture/surface. In the lighting pass we simply reference the associated BRDF and boom – bricks look like bricks, plastic looks like plastic, wood looks like wood, etc., no matter where you put the light and no matter what direction you view the surface from. This is essentially the holy grail of reflectance modeling (at least, once it has been extended to include sub-surface scattering and anisotropic reflections).
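As a toy illustration of what such a “texture” lookup might look like (Python; the resolution and fill function are invented for the example – a real table would be far denser and filtered rather than nearest-neighbor):

```python
import math

# A toy tabulated BRDF: a value indexed by discretized
# (theta_in, phi_in, theta_out, phi_out). Storage grows as RES^4,
# which is exactly why compression becomes the central problem.
RES = 8  # bins per angular dimension; purely illustrative

def make_table(fill):
    """Build a RES^4 table by evaluating `fill` at each bin index."""
    return [[[[fill(ti, pi_, to, po)
               for po in range(RES)] for to in range(RES)]
             for pi_ in range(RES)] for ti in range(RES)]

def lookup(table, theta_in, phi_in, theta_out, phi_out):
    """Nearest-neighbor lookup; theta in [0, pi/2], phi in [0, 2*pi)."""
    def bin_of(angle, span):
        return min(int(angle / span * RES), RES - 1)
    return table[bin_of(theta_in, math.pi / 2)][bin_of(phi_in, 2 * math.pi)] \
                [bin_of(theta_out, math.pi / 2)][bin_of(phi_out, 2 * math.pi)]

# Example fill: a view-independent (Lambert-like) surface that ignores
# the outgoing direction entirely.
table = make_table(lambda ti, pi_, to, po: math.cos((ti + 0.5) * (math.pi / 2) / RES))
```

The lighting shader never needs to know which material it’s shading – it just indexes whichever table the surface references, which is the whole appeal of the scheme.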
So where’s the problem? Remember that a BRDF is, at a minimum, a 4-dimensional function – and we’re trying to tabulate all of that data at a level of precision high enough that we see little or no banding or ringing. Clearly we now have a massive compression problem: we’d like to take all this data (easily over 16 MB per BRDF) and compress it so that the memory footprint is acceptable (say 256–512 KB), the core integrity of the data is preserved, and we can decompress it on the fly extremely quickly. Luckily, a good amount of research has been done in the academic community toward this end. Since the data we’re trying to compress is defined over the hemisphere about the point of incidence, spherical bases are an obvious first place to look – among them (to name a couple) spherical harmonics and wavelets. Spherical harmonics give very promising results when we limit our scope to extremely low-frequency lighting, but as soon as you raise the frequency of the lighting you start needing far too many coefficients to model the BRDF – not to mention all the banding that occurs even with a copious number of coefficients.
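The low-frequency limitation is easy to demonstrate. The sketch below (Python, illustrative) projects a circularly symmetric glossy lobe onto zonal harmonics – the rotationally symmetric subset of spherical harmonics, which keeps the math to a 1D Legendre expansion – and shows how much reconstruction error remains at the lobe’s peak even as bands are added:

```python
import math

def legendre(l, x):
    """Legendre polynomial P_l(x) via the standard three-term recurrence."""
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for k in range(2, l + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    return p1

def zh_project(f, bands, steps=512):
    """Project a circularly symmetric lobe f(cos_theta) onto zonal harmonics:
    c_l = (2l+1)/2 * integral_{-1}^{1} f(x) P_l(x) dx, by the midpoint rule."""
    coeffs = []
    for l in range(bands):
        acc = 0.0
        for i in range(steps):
            x = -1.0 + (i + 0.5) * 2.0 / steps
            acc += f(x) * legendre(l, x)
        coeffs.append((2 * l + 1) / 2.0 * acc * 2.0 / steps)
    return coeffs

def zh_eval(coeffs, x):
    return sum(c * legendre(l, x) for l, c in enumerate(coeffs))

sharp = lambda x: max(x, 0.0) ** 50      # a glossy, high-frequency lobe
err = lambda n: abs(zh_eval(zh_project(sharp, n), 1.0) - sharp(1.0))
```

With 4 bands the peak of the lobe is almost entirely lost; adding bands helps, but for a sharp lobe you need band counts on the order of the lobe’s exponent before the reconstruction is usable – which is exactly the coefficient blow-up described above.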
The solution, for the time being, seems to lie with wavelets or some form thereof. A number of research papers have been published recently (and some not so recently) on using wavelets to compress BRDFs and then decompressing them on the fly for real-time lighting. A couple of these papers manage to get quite impressive results, too, with BRDFs compressed to a maximum of around 256 KB. The real-time rendering portions of their techniques leave a bit to be desired, though – render rates are generally around 5–10 Hz for extremely simple scenes at low resolution on the fastest hardware around today. This actually boggles my mind a little, as I was able to build a few tech demos based on the techniques described in their papers (although using none of their code). I achieved roughly equivalent results as far as quality goes, but my render rates were a full order of magnitude (or more) greater, for far more complex scenes on far less sophisticated hardware. That’s the difference between a technique that never leaves the academic community and one that becomes common practice in current and next-generation games and real-time rendering.
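For intuition, here’s a minimal sketch of the compression step on a 1D slice of a lobe (Python; real systems transform the full 4D table and typically use fancier wavelets than Haar): transform, keep only the largest-magnitude coefficients, and reconstruct:

```python
import math

def haar_forward(data):
    """Normalized 1D Haar transform; len(data) must be a power of two."""
    out = list(data)
    n = len(out)
    while n > 1:
        half = n // 2
        tmp = out[:n]
        for i in range(half):
            out[i] = (tmp[2 * i] + tmp[2 * i + 1]) / math.sqrt(2)        # average
            out[half + i] = (tmp[2 * i] - tmp[2 * i + 1]) / math.sqrt(2)  # detail
        n = half
    return out

def haar_inverse(coeffs):
    """Exact inverse of haar_forward."""
    out = list(coeffs)
    n = 2
    while n <= len(out):
        half = n // 2
        tmp = out[:n]
        for i in range(half):
            out[2 * i] = (tmp[i] + tmp[half + i]) / math.sqrt(2)
            out[2 * i + 1] = (tmp[i] - tmp[half + i]) / math.sqrt(2)
        n *= 2
    return out

def compress(data, keep):
    """Zero out all but the `keep` largest-magnitude coefficients."""
    coeffs = haar_forward(data)
    threshold = sorted((abs(c) for c in coeffs), reverse=True)[keep - 1]
    return [c if abs(c) >= threshold else 0.0 for c in coeffs]

# A smooth 1D slice of a glossy lobe, 64 samples; keep 8 of 64 coefficients (8:1).
slice_ = [max(math.cos(i / 63.0 * math.pi / 2), 0.0) ** 8 for i in range(64)]
approx = haar_inverse(compress(slice_, keep=8))
```

Because so much of a smooth slice lands in a handful of coefficients, an 8:1 truncation reconstructs the lobe closely – and the inverse transform is cheap enough to be a plausible per-lookup decompression step, which is what makes the wavelet approach attractive for real-time use.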
I’m at a loss as to what exactly was hampering their implementations so drastically, as all I did was basically follow their instructions (granted, their instructions were more at the conceptual level, leaving tons of room for varied implementations), and yet we arrived at two completely different levels of performance in our final implementations. As soon as time (which is so incredibly short these days) allows, I plan to do a much more thorough investigation of what exactly is going on here.
Anyway, in summary – if we’re not there yet, we’re damn close to being able to have completely data-driven, physically and aesthetically correct BRDFs defined on a per-surface basis. No more Phong shaders run on every surface in the world, and no more lists of hundreds of different light shaders that an artist has to weed through to find the one that works for the surface they’re trying to model. There will be one data-driven lighting function across all surfaces, lighting each surface exactly as it should be lit, as defined by the BRDF associated with that surface.
Even if you don’t go this direction with your renderer – please, I beg of you, at least start using actual measured BRDF data to fit your lighting model to the surface you’re trying to represent. Using nothing but an artist’s eye to fit 4+-dimensional data to a single lighting model is never going to work out right. Fine-tuning the data after it’s been fit as closely as possible to the real-world data is fine, but at least use some sound, measured data somewhere in your pipeline to anchor your fits. It won’t in any way affect your rendering performance, and it will make everything look a few orders of magnitude better.
[Coming: Lots more BRDF databases as soon as I dig out my favorites from my other computer]