Thursday, April 24, 2008

Course Review

My independent study has come to an end, but I'm going to continue exploring a number of the topics I learned about. I've gained a foothold in current techniques, but there is a lot left to cover, and "current" doesn't last very long.

Relying on content available online for free turned out to be a bit of a limitation. Beyond that, getting set up took more time than I had expected. I had to put together code for handling the basic math; luckily, I'd already written some of it for OpenGL work I had done in the past.

Learning the OpenGL Shading Language went fairly quickly, and while I'm able to use it, I still need to look things up from time to time. The thing I found most difficult was the interaction between OpenGL and the shading language. For example:

Do you need to enable fixed function texturing to access texture data in GLSL?
Do you need to enable client states when using vertex buffer objects?
Do you need to enable fixed function lighting when using the fixed function lighting parameters in GLSL?


For a while I wasn't sure if I was encountering driver bugs in the Leopard drivers for my ATi 9700 Mobility, or if I had a bug in my code, or if I simply didn't understand the interactions.

It turns out that some of my confusion was due to driver bugs, and things became much clearer after switching to new hardware.

I've discovered that one of the most fundamental operations in modern real-time rendering is rendering to a texture. That said, it's what you do with the texture that makes things interesting: a number of effects are the result of post-processing rendered information. To that end, I wrote a deferred shading backend that stores the information required for shading in a number of buffers and composites them at the end. I looked at (but didn't have time to implement) writing a water shader, which follows this formula fairly closely. I read the articles in GPU Gems about wave simulation and felt I had a handle on them, but I didn't understand how to actually do the shading. After writing the deferred shading backend and the reflection/refraction shader, the basic operations were obvious.

Assume your water surface is a plane (this assumption will be broken later; it's used only for generating the reflection and refraction maps). Render the pools of water, setting the stencil buffer wherever water fragments pass the stencil-depth test. Scale the scene by Y = -1 and re-render to a texture; now you've got an inverted version of the scene stored in a texture, which serves as your reflection map. Next, render the scene below the water plane and store it in a texture, and you've got your refraction map. Finally, render the scene normally, with some simulation of the water's surface. When shading the water, simply look up into your reflection and refraction maps to calculate those terms, and potentially use the depth of things below the water plane as a fogging value.
The results won't be perfectly accurate, because the maps assume a planar water surface, but they should be close enough to do a convincing job.
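The reflection pass above boils down to reflecting geometry across the water plane y = h, which is the translate/scale-by-Y=-1/translate-back trick in one step. A minimal sketch (the names and structure are my own, not from any particular codebase):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Reflect a point across the horizontal water plane y = h. This is
// equivalent to translating the plane to y = 0, scaling Y by -1, and
// translating back: y' = 2h - y.
Vec3 reflectAcrossWaterPlane(const Vec3 &p, double h) {
    return { p.x, 2.0 * h - p.y, p.z };
}
```

In a real renderer you'd fold this into the modelview matrix before the reflection pass, and flip the front-face winding (e.g. with glFrontFace), since reflecting through a plane inverts handedness.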

I studied Preetham's Sky Model and implemented it, but was disappointed with the results of the simplified atmospheric scattering equations. Apparently Nishita's Sky Model was used for Crysis; I had a look at it, but didn't get as far as an implementation.
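For reference, the heart of Preetham's model is the Perez luminance distribution. A sketch of the math follows; note the coefficients A through E are turbidity-dependent fits from the paper, so the ones you pass in here are placeholders, not the published values:

```cpp
#include <cassert>
#include <cmath>

// Perez luminance distribution. theta is the angle from the zenith to
// the view direction; gamma is the angle between the view direction
// and the sun. A..E are turbidity-dependent coefficients.
double perez(double theta, double gamma,
             double A, double B, double C, double D, double E) {
    double cg = std::cos(gamma);
    return (1.0 + A * std::exp(B / std::cos(theta))) *
           (1.0 + C * std::exp(D * gamma) + E * cg * cg);
}

// Sky luminance for a view direction, relative to the zenith
// luminance Yz, with the sun at zenith angle thetaS.
double skyLuminance(double Yz, double theta, double gamma, double thetaS,
                    double A, double B, double C, double D, double E) {
    return Yz * perez(theta, gamma, A, B, C, D, E) /
                perez(0.0, thetaS, A, B, C, D, E);
}
```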

Ambient Occlusion was a rewarding topic to spend time on. I feel like I've managed to learn a great deal about it, and I'm looking forward to trying out some ideas I have to improve the quality of my renders. I'll need to acquire better model and texture data to make it worth my while, but I've come across a handful of models this evening that might be useful in that regard.

Shadow maps were simply infuriating to work with. The technique is quite simple, but there were driver problems I wasn't aware of. I implemented the technique twice, and spent a good amount of time debugging before trying it on a new machine, only to find that it worked.

I'm excited by Variance Shadow Maps and Exponential Shadow Maps, and I'll probably implement both techniques. As I understand it, VSM (and ESM to a lesser extent) suffers from light bleeding, but there are adaptations that reduce the artifacts.
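The core of the VSM test is small enough to sketch on the CPU: the shadow map stores depth and depth squared, filtering yields the two moments, and Chebyshev's inequality gives an upper bound on the lit fraction. The minVariance clamp is a common numeric-robustness tweak, and the way the bound degrades is one source of the light bleeding mentioned above; names here are my own:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// m1 = E[depth], m2 = E[depth^2] (from the filtered shadow map),
// t = receiver depth in light space. Returns an upper bound on the
// fraction of the filter region that is unoccluded.
double chebyshevUpperBound(double m1, double m2, double t, double minVariance) {
    if (t <= m1) return 1.0;  // receiver is in front of the occluders: fully lit
    double variance = std::max(m2 - m1 * m1, minVariance);
    double d = t - m1;
    return variance / (variance + d * d);
}
```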

Spending time learning about recent extensions to OpenGL proved rewarding, and has definitely changed the way I write OpenGL code. The extensions should have that effect, since OpenGL is moving towards an object-oriented structure and away from its state-machine heritage.

I'm going to start exploring DirectX 10, and I'd love to write some OpenGL ES 2.0 code, but I'm really just waiting for GL 3.0.

Andrew.

Saturday, April 19, 2008

Reflection and Refraction


I haven't tone mapped the image, but here's a shot of a reflection and refraction demo. The cube map is generated using one of Paul Debevec's light probe images in cross-cube format. Look for the ImageIO post for code to read images using Apple's Image IO toolkit.
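The lookup vectors for a demo like this are just the standard reflection and refraction formulas, mirroring GLSL's reflect() and refract() built-ins. A CPU-side sketch (my own helper names, with I the incident direction, N the unit normal, and eta the ratio of indices of refraction):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(const Vec3 &a, const Vec3 &b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
Vec3 scale(const Vec3 &v, double s) { return {v.x*s, v.y*s, v.z*s}; }

// Mirror reflection: I - 2(N.I)N, same as GLSL's reflect().
Vec3 reflectVec(const Vec3 &I, const Vec3 &N) {
    return sub(I, scale(N, 2.0 * dot(N, I)));
}

// Snell's-law refraction, same convention as GLSL's refract();
// returns the zero vector on total internal reflection.
Vec3 refractVec(const Vec3 &I, const Vec3 &N, double eta) {
    double ndoti = dot(N, I);
    double k = 1.0 - eta * eta * (1.0 - ndoti * ndoti);
    if (k < 0.0) return {0.0, 0.0, 0.0};
    return sub(scale(I, eta), scale(N, eta * ndoti + std::sqrt(k)));
}
```

In the fragment shader, each vector indexes straight into the cube map for the reflection and refraction terms.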

Preetham Sky Model

Here's a shot from my implementation of Preetham's Sky Model.


A few screen shots

Two shots comparing a texture-mapped quad and a parallax normal-mapped quad:


Friday, April 18, 2008

Deferred Shading

I wrote a deferred shading demo to get familiar with the technique. The lack of support for translucent objects and MSAA are non-issues for me: I can write a separate path for translucent objects, and I can do some hackery or switch to DX10 to get MSAA. Prior to writing the demo, I had expected that it was a tool I would use for most of my rendering needs; now I look at it as a tool to use in certain situations. I need to play with it further to see if it's generally useful to me.
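The composite pass is where deferred shading pays off: lighting runs once per visible pixel, reading material data back out of the G-buffer. A minimal per-pixel sketch for one point light, assuming a G-buffer holding albedo, world-space normal, and world-space position (in the demo this would be a fragment shader sampling those buffers as textures; the names here are my own):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3 &a, const Vec3 &b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
Vec3 mul(const Vec3 &a, const Vec3 &b) { return {a.x*b.x, a.y*b.y, a.z*b.z}; }
Vec3 scale(const Vec3 &v, double s) { return {v.x*s, v.y*s, v.z*s}; }
double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(const Vec3 &v) { return scale(v, 1.0 / std::sqrt(dot(v, v))); }

// Diffuse term for one point light, evaluated from G-buffer contents.
Vec3 shadePixel(const Vec3 &albedo, const Vec3 &normal, const Vec3 &position,
                const Vec3 &lightPos, const Vec3 &lightColor) {
    Vec3 L = normalize(sub(lightPos, position));
    double ndotl = std::max(dot(normal, L), 0.0);
    return scale(mul(albedo, lightColor), ndotl);
}
```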

Monday, April 14, 2008

Screen Space Ambient Occlusion

I've had my fun; I think I'll adopt Crysis-style SSAO.

To resolve artifacts and improve image quality in the method I'm using, I've got to start taking samples from the depth buffer, finding world-space positions, and re-projecting anyway.
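The reconstruction step amounts to linearizing the depth-buffer sample and scaling the per-pixel view ray by it. A sketch, assuming a standard GL perspective projection with near plane n and far plane f (function and parameter names are my own):

```cpp
#include <cassert>
#include <cmath>

// Convert a [0,1] depth-buffer value back to positive eye-space depth,
// assuming the usual GL perspective depth mapping.
double linearEyeDepth(double d, double n, double f) {
    double zNdc = 2.0 * d - 1.0;
    return 2.0 * n * f / (f + n - zNdc * (f - n));
}

struct Vec3 { double x, y, z; };

// Eye-space position: the view ray through the pixel (from NDC x/y,
// vertical FOV, and aspect ratio) scaled out to the linearized depth.
Vec3 eyePosition(double xNdc, double yNdc, double depth,
                 double tanHalfFovY, double aspect) {
    return { xNdc * tanHalfFovY * aspect * depth,
             yNdc * tanHalfFovY * depth,
             -depth };
}
```

From there, multiplying by the inverse view matrix yields the world-space position used for the occlusion samples.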

GLSL Array Index Problems & Software Mode

I was revisiting my per-pixel lighting code, just making sure it was simple and clear. I pulled out the attenuation (and spot attenuation) code and tossed it into its own function.

Bam. Software Rendering.

So what happened? Originally I had specified literal light-array indices. Once I added the functions for attenuation, I passed the indices as const ints. Apparently the GLSL driver on my Mac can't deal with this, and decides that the best way to handle the situation is to drop to software rendering. It would be nice if GL told you it was dropping to a software renderer; I would prefer crashing to failing silently.

I'm getting to a point where I'm seriously considering DX10 again. I've already installed Vista via Boot Camp....

Thursday, April 10, 2008

Screen Space Ambient Occlusion








I decided to take a couple of days and try developing my own SSAO algorithm. The images are the computed AO coefficients. I'm not finished yet, but the images give an idea of the evolution of the algorithm.

Andrew.

Sunday, April 6, 2008

Parallax Mapping

I've implemented parallax mapping and thought I'd throw up a picture. I'd like to extend it to parallax occlusion mapping, but I'll have to see how much time I have.
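The core of basic parallax mapping is a single UV offset along the tangent-space eye vector, scaled by the value sampled from a height map. A sketch of that math (the scale/bias values mentioned in the comment are common tweakables, and all names here are my own):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double u, v; };
struct Vec3 { double x, y, z; };

// Offset-limited parallax mapping: shift the texture coordinate along
// the tangent-space eye vector by the biased height. Typical values
// are scale ~0.04 and bias ~-0.02; skipping the divide by eyeTangent.z
// limits the offset and reduces swimming at grazing angles.
Vec2 parallaxOffset(Vec2 uv, const Vec3 &eyeTangent, double height,
                    double scale, double bias) {
    double h = height * scale + bias;
    return { uv.u + eyeTangent.x * h, uv.v + eyeTangent.y * h };
}
```

Parallax occlusion mapping replaces this single offset with a short ray march through the height field, which is the extension mentioned above.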