I found that relying only on content available online for free was a bit of a limitation. Beyond that, getting set up took more time than I had expected: I had to put together code for handling the basic math, though luckily I'd already written some of it for OpenGL work I had done in the past.
Learning The OpenGL Shading Language went fairly quickly, but while I'm able to use it, I still need to look things up from time to time. The thing I found most difficult had to do with the interactions between OpenGL and the shading language.
Do you need to enable fixed function texturing to access texture data in GLSL?
Do you need to enable client states when using vertex buffer objects?
Do you need to enable fixed function lighting when using the fixed function lighting parameters in GLSL?
For a while I wasn't sure if I was encountering driver bugs in the Leopard drivers for my ATi 9700 Mobility, or if I had a bug in my code, or if I simply didn't understand the interactions.
It turns out that some of my confusion was due to driver bugs, and things became much clearer after switching to new hardware.
I've discovered that one of the most fundamental operations in modern real-time rendering is rendering to a texture. That said, it's what you do with the texture that makes things interesting: a number of effects are the result of post-processing rendered information. To that end I wrote a deferred shading backend that stores the information required for shading in a number of buffers and composites them at the end. I looked at (but didn't have time to implement) writing a water shader, which follows this formula fairly closely. I read the articles in GPU Gems about wave simulation and felt I had a handle on them, but I didn't understand how to actually do the shading. After writing the deferred shading backend and the reflection/refraction shader, the basic operations were obvious.
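The compositing step of a deferred renderer can be sketched in a few lines. Here a tiny NumPy mock-up stands in for the G-buffer textures and the fullscreen lighting pass; the buffer layout and light are made up for illustration, not taken from my implementation:

```python
import numpy as np

# A 2x2-pixel G-buffer: the geometry pass writes per-pixel albedo and
# world-space normals; the composite pass shades from those buffers alone.
albedo = np.array([[[0.8, 0.2, 0.2], [0.2, 0.8, 0.2]],
                   [[0.2, 0.2, 0.8], [0.5, 0.5, 0.5]]])
normals = np.zeros((2, 2, 3))
normals[..., 1] = 1.0                    # every pixel faces straight up

light_dir = np.array([0.0, 1.0, 0.0])    # directional light from above

# Composite pass: a simple Lambert term per pixel; no scene geometry is
# needed any more, only the stored buffers.
ndotl = np.clip(normals @ light_dir, 0.0, 1.0)
color = albedo * ndotl[..., None]
```

The point of the structure is that adding another light only costs another pass over the buffers, not another pass over the scene.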
Assume your water surface is a plane (this assumption will be broken later; it is used only for generating the reflection and refraction maps). Render the pools of water, setting the stencil buffer wherever water fragments pass the stencil-depth test. Scale the scene by Y = -1 and re-render it to a texture; now you've got a mirrored version of the scene stored in a texture, your reflection map. Next, render the scene below the water plane and store it in a texture, and you've got your refraction texture. Finally, render the scene normally, with some simulation of the water's surface. When shading the water, simply look up into your reflection and refraction maps to compute those terms, and potentially use the depth of things below the water plane as a fogging value.
They won't be perfectly accurate, because the maps assume a planar water surface, but they should be close enough to do a convincing job.
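The mirror step above amounts to multiplying the view by a reflection matrix. A small sketch, assuming the water plane is y = h (the helper name is mine); for h = 0 it reduces to the plain Y = -1 scale:

```python
import numpy as np

def reflect_about_plane_y(h):
    """4x4 matrix that mirrors the scene across the plane y = h.
    y' = -y + 2h, so for h = 0 this is just a scale by (1, -1, 1)."""
    M = np.identity(4)
    M[1, 1] = -1.0
    M[1, 3] = 2.0 * h
    return M

# A point 3 units above the plane y = 0 ends up 3 units below it.
p = np.array([5.0, 3.0, -2.0, 1.0])
q = reflect_about_plane_y(0.0) @ p
```

Rendering with this matrix prepended to the view gives the inverted scene; remember to flip the winding for face culling, since a mirror transform reverses it.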
I studied Preetham's Sky Model, and implemented it, but was disappointed with the results of the simplified atmospheric scattering equations. Apparently, Nishita's Sky Model was used for Crysis, and I had a look at it, but didn't get as far as an implementation.
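For reference, the distribution at the heart of Preetham's model is the Perez formula; a sketch of it below, with placeholder coefficient values (Preetham derives A through E from turbidity, which I don't reproduce here):

```python
import math

def perez(theta, gamma, A, B, C, D, E):
    """Perez luminance distribution used by Preetham's sky model.
    theta: angle from the zenith to the viewed sky point,
    gamma: angle between the viewed point and the sun."""
    return ((1.0 + A * math.exp(B / max(math.cos(theta), 1e-4))) *
            (1.0 + C * math.exp(D * gamma) + E * math.cos(gamma) ** 2))

def sky_luminance(Yz, theta, gamma, theta_s, coeffs):
    """Luminance of a sky point, normalised so the zenith (theta = 0,
    gamma = theta_s) comes out at the zenith luminance Yz."""
    return Yz * perez(theta, gamma, *coeffs) / perez(0.0, theta_s, *coeffs)

# Placeholder coefficients, just to exercise the formula.
coeffs = (-1.0, -0.32, 10.0, -3.0, 0.45)
```

The same ratio is evaluated separately for the x and y chromaticity channels; the "simplified scattering" I was disappointed by lives in how the coefficients and zenith values are fitted, not in this distribution itself.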
Ambient Occlusion was a rewarding topic to spend time on. I feel like I've managed to learn a great deal about it, and I'm looking forward to trying out some ideas I have to improve the quality of my renders. I'll need to acquire better model and texture data to make it worth my while, but I've come across a handful of models this evening that might be useful in that regard.
Shadow maps were simply infuriating to work with. The technique is quite simple, but there were driver problems I wasn't aware of. I implemented the technique twice, and spent a good amount of time debugging before trying it on a new machine, only to find that it worked.
I'm excited by Variance Shadow Maps and Exponential Shadow Maps, and I'll probably do an implementation of both techniques. As I understand it, VSM (and ESM to a lesser extent) suffers from light bleeding, but there are adaptations that reduce the artifacts.
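A sketch of the two filtered-shadow tests, assuming the maps already store the filtered moments (VSM) or filtered exponentials (ESM); the constant c is a typical choice, not prescribed:

```python
import math

def vsm_visibility(mean, mean_sq, t):
    """Variance shadow maps: the map stores E[d] and E[d^2] (filterable);
    Chebyshev's inequality gives an upper bound on P(d >= t). The bound
    overestimates visibility, which is where the light bleeding comes from."""
    if t <= mean:
        return 1.0
    variance = max(mean_sq - mean * mean, 1e-6)
    d = t - mean
    return variance / (variance + d * d)

def esm_visibility(exp_depth, t, c=40.0):
    """Exponential shadow maps: the map stores exp(c*d), which filters
    linearly; the test becomes exp(c*d) * exp(-c*t), clamped to 1."""
    return min(exp_depth * math.exp(-c * t), 1.0)
```

Both make the shadow map filterable (mipmaps, blur, MSAA resolve), which is the whole attraction over the hard binary comparison.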
Spending time learning about recent extensions to OpenGL proved rewarding, and has definitely changed the way I write OpenGL code. The extensions should have that effect, since OpenGL is moving towards a more object-oriented structure and away from its state-machine heritage.
I'm going to start exploring DirectX 10, and I'd love to write some OpenGL ES 2.0 code, but I'm really just waiting for GL 3.0.
Andrew.
2 comments:
You are right: both ESM and VSM suffer from light bleeding, though in completely different ways.
ESM breaks down when the receiver is not planar within your filtering window.
You can have the most complex object in the world; with ESM its shadow will be perfect if cast on a plane.
Now make that plane a sequence of stair steps and you will observe light bleeding at the steps' depth discontinuities (as seen from the light's POV).
Bigger discontinuities -> bigger artifacts.
I'm working on some sort of improvement in my spare time; I haven't found anything that is completely robust so far (or more robust than ESM without breaking what I already have).
Marco
Sorry it took so long to publish this comment. I put this blog away after finishing my course work, and only recently decided to continue updating it.
With regards to your efforts towards robust shadow maps, have you made any progress? I haven't had a chance to dig into the ESM/VSM material in any serious depth yet, but I'm looking forward to it toward the end of the summer.