Tuesday, November 01, 2005
HDR (High Dynamic Range) Technology
(Review) - We’ve all played Half-Life and its sequel, Half-Life 2. The difference between the two games, in terms of graphics, is tremendous, and now Valve has gone ahead and updated the game engine to deliver a level of detail and realism that you thought wouldn’t be possible until perhaps the next round of game releases.
HDR, or High Dynamic Range, is a lighting process that’s been designed to emulate in-game or artificially generated lighting to closely mirror the changes we see in the real world.
In simpler terms, HDR lets objects appear brighter by using the full brightness range of the monitor, instead of being limited to the brightness level at which they were shot (or rendered) in the scene.
Dynamic range is, by definition, the ratio of the largest to the smallest measurable value of a signal. Today’s standard formats use color component values from 0 (for black) to 1 (for white), but you can’t define brighter, more vibrant colors by entering a value of 2 for white to make it whiter than its traditional shade. This limits lighting effects such as the glint on the metal blade in Prince of Persia: Warrior Within.
Using HDR, you can specify values far outside the restrictive 0-1 range we are used to. To give you an everyday example, when you drive on a sunny day and emerge from a tunnel, the sunlight seems blazingly brilliant for a moment because your eyes take some time to adjust to the difference in light intensity. In a game like NFS, replicating this phenomenon is nearly impossible without the ability to specify whiteness beyond level 1, but with HDR you can accomplish just that, which is why it matters to gamers who demand realism from their games.
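To make that concrete, here is a tiny illustrative sketch (not taken from any engine; the scene radiance numbers and exposure values are invented) of why values above 1 matter: the display still only shows 0-1, but an "exposure" control, standing in for the eye adapting, decides which part of the wider range survives.

# Illustrative sketch only: scene radiance values and exposures are invented.
# The monitor still only shows 0-1; "exposure" stands in for the eye adapting.

def tone_map(radiance, exposure):
    """Scale scene radiance by the current exposure and clamp to the displayable 0-1 range."""
    return min(radiance * exposure, 1.0)

scene = {"tunnel wall": 0.05, "car headlight": 1.5, "sunlit road": 8.0}

for exposure in (1.0, 0.2):      # dark-adapted eye vs. eye adapted to sunlight
    shown = {name: round(tone_map(value, exposure), 2) for name, value in scene.items()}
    print("exposure", exposure, "->", shown)

At exposure 1.0 the headlight and the road both clip to 1.0 and look identical; drop the exposure to 0.2 and the road still reads far brighter than the headlight, detail that would have been lost if the scene had been stored with everything already clamped to 1.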
Up until now, such effects have been achieved with a technique known as blooming. Blooming lets the light from an overly bright object spill onto the pixels around it, making them appear brighter and giving the impression of an intensely lit object.
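As a rough sketch of the idea (a toy 1-D example with an arbitrary threshold and blur, not how any shipping engine implements it):

# Toy blooming sketch on a 1-D row of pixel brightness values.
# Threshold and blur weights are arbitrary; real implementations blur whole frames.

def bloom(row, threshold=0.8):
    # Keep only the "overly bright" portion of each pixel.
    bright = [max(v - threshold, 0.0) for v in row]
    # Spill it onto neighboring pixels with a simple 3-tap blur.
    spill = [0.25 * bright[max(i - 1, 0)] + 0.5 * bright[i] + 0.25 * bright[min(i + 1, len(row) - 1)]
             for i in range(len(row))]
    # Add the blurred glow back on top of the original row and clamp for display.
    return [min(v + s, 1.0) for v, s in zip(row, spill)]

print(bloom([0.1, 0.1, 1.0, 0.1, 0.1]))   # the pixels next to the bright one pick up a glow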
HDR, however, doesn’t just make whites brighter; it also makes blacks appear deeper while preserving the subtle details of the image.
How does it work? Traditionally, images are stored in the RGB format, where each pixel records exactly how much of each of the three colors it is supposed to display to give you an accurate image.
The problem is that however bright an image might be, how much of that brightness we actually see depends solely on the monitor displaying it, and no monitor in the world today comes anywhere close to the range of brightness levels our eyes can perceive.
We all know that we can shoot several photographs of the same scene and make it look completely different just by changing the exposure settings. For instance, if you photograph the night sky in your camera’s Auto mode, the shot will come out mostly black and be pretty much useless. But if you set the shutter speed to around 10-15 seconds and keep all other settings constant, you will get a completely different look at the same night sky, with depth and detail you missed in Auto mode. The problem with this kind of photography is obvious: if your scene has a bright object in it, that object will be completely blown out by the over-exposure.
Basically, if you take a picture at a low exposure setting, you’ll capture greater detail in overly bright objects, and if you push the exposure very high, you’ll capture even the most dimly lit objects, and herein lies the contradiction.
We want to see even the dimmest objects, but we obviously can’t do without seeing the most well-lit ones, so there has to be some kind of compromise.
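A toy example of that trade-off (the scene brightness values and exposures below are made up): each simulated photo multiplies the true brightness by its exposure and clips at 1.0, the point where the sensor is saturated.

# Made-up scene: a faint star, a street lamp, and the moon.
scene = {"faint star": 0.02, "street lamp": 0.6, "moon": 5.0}

def photograph(scene, exposure):
    """Simulated camera: scale by exposure, then clip at the sensor's limit of 1.0."""
    return {name: round(min(b * exposure, 1.0), 3) for name, b in scene.items()}

print("short exposure:", photograph(scene, 0.1))    # moon keeps detail, star disappears
print("long exposure: ", photograph(scene, 20.0))   # star shows up, lamp and moon blow out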
Therefore, the concept of a radiance map was proposed. The concept was simple, and follows directly from the conflict explained above: capture multiple levels of brightness (from dim to over-exposed) by taking multiple shots of the same scene.
We then compare the various shots and store the differing brightness values, mapping the entire image by its varied brightness levels. This information is stored along with the color information of the image, so each pixel carries a fourth value containing its exposure/brightness, and the entire image ends up in an RGBA-style format.
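Sketching the idea in code (the exposure values and pixel brightnesses are invented, and real radiance-map construction is considerably more involved): from several clipped shots we can recover one wide-range brightness per pixel by trusting the brightest shot that did not clip.

exposures = [0.25, 1.0, 4.0]            # three invented exposure levels

def capture(true_brightness, exposure):
    """Simulated shot: scale by exposure and clip at 1.0."""
    return min(true_brightness * exposure, 1.0)

def recover(shots, exposures):
    """Estimate the true brightness from the shots that did not clip."""
    usable = [value / exposure for value, exposure in zip(shots, exposures) if value < 1.0]
    # If every shot clipped, all we really know is a lower bound.
    return max(usable) if usable else 1.0 / min(exposures)

for true in (0.05, 0.9, 6.0):           # a dim, a mid, and a very bright pixel
    shots = [capture(true, e) for e in exposures]
    print("shots", shots, "-> recovered", recover(shots, exposures))

In a full radiance map each pixel would also keep its color, so the recovered brightness rides along as the fourth channel of that RGBA-style layout.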
The obvious question in your mind would be: if the monitor simply isn’t capable of displaying the "over-bright" data, what is the point of going through all this trouble, since the information will be discarded anyway?
It’s a valid question, but when developers (Valve) investigated this, they realized a lot could be done with this information, namely producing convincing blur and glow effects which, when applied to ordinary low-range images, would appear washed out and unreal.
Another possibility for playing with lighting was radiosity. Radiosity is a way of rendering a scene using only its visible light sources. What this means is that if you are inside a dome in Quake 3, the only areas lit up in the scene will be those actually reached by light from the sources in the room.
Logically, using this technique would imply that if light doesn’t reach a particular area of the room, it’ll stay unlit and hidden. To counter this effect, most game developers put in hidden lighting so you can see everything in the room.
Using HDR, developers don’t have to use hidden lighting and can light the scene entirely through the radiosity method. The advantage is even greater realism, since radiosity lets them control the light’s behavior closely enough to achieve effects such as a change in the light’s characteristics (mostly its color) when it passes through colored glass tiles and the like.
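A toy illustration of the colored-glass effect (all values invented): the light’s RGB intensity is simply multiplied, channel by channel, by the glass’s transmittance, and because an HDR light can start well above 1.0, the tinted result still carries enough energy to light the room.

def filter_light(light_rgb, glass_rgb):
    """Multiply each channel of the light by the glass's transmittance for that channel."""
    return tuple(l * g for l, g in zip(light_rgb, glass_rgb))

sunlight  = (8.0, 8.0, 7.5)     # an HDR light source, well above the 0-1 range
red_glass = (0.9, 0.2, 0.2)     # passes mostly red

print(filter_light(sunlight, red_glass))   # (7.2, 1.6, 1.5): a strong, still-bright red cast

Had the sunlight been clamped to (1, 1, 1) before the multiplication, the tinted light would come out dim instead of vividly red.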
Another effect generated using HDR, and implemented in Far Cry, is blooming: bright light from a source often appears to leak out around the edges of thin objects placed in front of it. This lets developers recreate the dazzle we experience when moving from a dimly lit area into a bright, sunlit environment.
Despite multiple problems that delayed development and ensured Half-Life 2 did not ship with HDR as standard, Valve finally managed to achieve what it had intended. It did so by going the multiple-capture route, storing three differently exposed versions of every scene and gauging the differing levels between them. It then made the engine "smart" enough to work out which lighting affects which area, and by how much, and stored only those values instead of every single aspect of the image.
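Purely as an illustration of that general idea, and emphatically not Valve’s actual code, here is a sketch that assumes three pre-exposed versions of each scene region and picks, per region, the exposure that keeps that region’s brightness closest to a readable mid-gray:

exposure_levels = (0.25, 1.0, 4.0)      # invented exposure set

def best_exposure(region_brightness, exposures=exposure_levels):
    """Pick the exposure that keeps this region closest to mid-gray after clipping at 1.0."""
    return min(exposures, key=lambda e: abs(min(region_brightness * e, 1.0) - 0.5))

regions = {"dark corridor": 0.1, "indoor room": 0.6, "sunlit exit": 3.0}
for name, brightness in regions.items():
    print(name, "-> use exposure", best_exposure(brightness))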
This technique is really wonderful and sensible, as it doesn’t require you to purchase a new line of graphics cards, and will work with most of the NVIDIA and ATI graphics solutions.
Currently, if you really want to experience HDR, you will unfortunately have to wait for HL-2: Lost Coast to become available, which is expected to launch soon.
With HDR, Valve has taken another step toward not only improving performance numbers, but also adding depth and realism to game titles. It’s now up to the others to play catch-up, and we are sure HDR will become a common visual-enhancing technology in upcoming titles.
source:http://www.cooltechzone.com/index.php?option=content&task=view&id=1931