Vide-Infra



a Generic Volume Renderer / Ray-Marcher

Languages / Libraries Used: C++, OpenGL, GLSL
Team Size: Individual
Personal Role: Everything
Development Time: Ongoing since September 2011

Note: This project has been put on hold while I continue my independent study at RIT on Real-Time Photon Mapping on the GPU. I plan to revisit it in the future, once I am more comfortable using OpenCL, with hopes of really overhauling its performance and capabilities.

Description: After the success I had with my previous volume renderer, and in light of the issues I faced and the improvements I wish I had had time to make, I recently decided to undertake another project of a similar nature. This project is also largely inspired by my need to improve the supporting code and technologies of my ongoing project, The Rewrite Engine, and I hope to develop it both as a proof of the engine's capabilities and as a means to flesh out the finer points of the framework itself. The fundamental difference between Vide-Infra and VolTex is that the latter was restricted to the nine-week development period allotted to me by my Computer Graphics II course at RIT, while Vide-Infra is being casually developed at my own pace. As such, I am hoping that creativity will flow more freely with this project, and that I will be able to take more time to explore some aspects of traditional volume rendering that were beyond my reach within the course's time frame. These aspects include, but are not limited to:


  • Added Functionality: One of the things I really wanted to make happen with the original VolTex implementation was transfer functions. The concept is simple: take the final resulting value, be it a color, density, gradient, or whatever you want it to mean, and run it through a function that maps it to a certain output. In the volume-rendering equations, this translates into an effective means to color or otherwise stylize different types of matter within, say, a human head, based on their densities or some other varying factor. If you download and play around with VolTex, you'll see that I have an extremely limited application of this concept in place: you can pick one range of densities (and ONLY one range!) and map it to a single RGB value. More subtly, because it is not labeled as such, VolTex's ability to ignore certain ranges of density entirely is technically a transfer function as well. Both of these types are present, functional, and serve their purpose, but what I would really like to do is implement a more dynamic and flexible version that can handle wider ranges, greater numbers of them, and more dimensions. A two-dimensional transfer function can take another parameter, such as gradient or depth, and use it to further distinguish the color applied to a volume sample. The theory and implementation are certainly more complex, but I feel this is something worth pursuing.
  • Enhanced Visual Aesthetics: Make it pretty! In all seriousness, the first iteration of VolTex did not do this as well as I felt it could have. It included a rudimentary Blinn-Phong illumination model that worked from surface normals generated in real time from the density gradients in the volume data sets. I haven't given this much thought yet, but it was fairly obvious that the major limiting factor in this algorithm was the real-time normal generation, which required over a dozen samples to be taken from the volume data surrounding a given point. I imagine that if I put my mind to it I can find better ways to achieve this effect in real time, which leads into the third area I'd like to improve.
  • Optimization Strategies: My original approach was not naive, exactly, but it was straightforward. I made large efficiency decisions early on, such as rendering to intermediate buffers so that ray marching became a screen-space operation, and while these were very fast, the algorithms were complex enough that I was only barely capable of comprehending them, let alone optimizing them further. This time through, I would like to look at the algorithm in more detail and consider further optimization. The most obvious choice for any ray-based algorithm is a spatial subdivision hierarchy, but I'm not sure such a thing is applicable to a screen-space algorithm like mine, which uses just two full-screen textures to generate the final image.

I'll update this page on occasion with progress, but I'll also be posting to the front page more frequently. For now, here is the latest image I've snapped of the newly built Vide-Infra executable running entirely on The Rewrite Engine!