Saturday, April 2, 2011

Project Update 2

Okay, lots of research has been done and I think I know where I'm going with this. First things first, an update on the original project description:

  • Volumetric display of MRI data: This has become the focus of my project due to the subject's complexity.

  • Volumetric density mapping: This has been all but discarded from the project. Accomplishing volumetric rendering at all is more than enough on my plate.

  • Three-dimensional object segmentation: This too has been put very far onto the back burner, for the same reasons as density mapping.

Now then, with that in mind, here's what I've learned:

First and foremost, my initial method of rendering slices is being thrown out. I've already posted on this matter once before so I'll leave the specifics there. Essentially the only way I could accomplish this well is if I use a shader to blend between slices, and at that point I might as well just try a different method.

Therefore, the logical path is to implement volume rendering via RAY MARCHING! HOORAH!!!! [/sarcasm]

But seriously, this is a pretty cool concept. I'm not going to go into the specifics here for now because I'm more than certain that I'll be writing a lot about it for my mid-quarter update soon. What's important now is that it's INCREDIBLY cool and INCREDIBLY complex, as most things in CG seem to be...



So that's the justification for the changes to the original project. I'm probably going to spend the majority of my time implementing ray marching successfully. However, once I DO do that, I'll be able to plug a lot of different things into the pipeline, like transfer functions, which will allow me to color my MRI scans accurately to the human body. I'm very excited about this.



I'll be updating here with my progress implementing ray marching.


Sub-Update 1


Okay, so the research I've done led me towards the method listed in GPU Gems Volume 3 (Link). The first thing I have to do in this approach is render the "back faces" of my cube. Thankfully, since I'm doing this in shaders, all of OpenGL's state information still applies, and I can render a cube with glCullFace(GL_FRONT) in order to drop the front faces!


 

This is actually rendered with the first part of what will become my ray-marching fragment shader as well.

So yeah, that's done. Another update soon.


Sub-Update 2


Next up was making sure that I could "ray march" the front faces of the cube as well. I put that in quotes because it's not really happening yet; what IS happening is that I'm managing to look up the texture correctly and pass it along to the ray-marching algorithm:



That meant that I had the starting and ending positions of the ray encoded within the two texture objects (the colors are actually XYZ positions), and I could calculate the ray directions, rendered below:
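To make the position-encoding idea concrete, here's a minimal CPU-side sketch of what the shader does with those two textures. The `Vec3` type and `rayDirection` helper are hypothetical; in the real fragment shader these would just be texel RGB values treated as XYZ:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical 3-vector; in the shader these are just RGB texel values.
struct Vec3 { float x, y, z; };

// The front-face texture holds the ray's entry point into the cube and the
// back-face texture holds its exit point, both encoded as RGB = XYZ in [0,1].
// Subtracting them recovers the segment the shader will march along.
Vec3 rayDirection(Vec3 front, Vec3 back, float* lengthOut) {
    Vec3 d{back.x - front.x, back.y - front.y, back.z - front.z};
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (lengthOut) *lengthOut = len;
    // Normalize so marching can use a fixed step size along the direction.
    if (len > 0.0f) { d.x /= len; d.y /= len; d.z /= len; }
    return d;
}
```

Rendering the direction itself as a color is what produces the debug image above: each fragment's RGB is literally its ray's XYZ direction.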





More to come soon.


Sub-Update 3


Success! Here's proof before I talk about the specifics...





So this is fully ray marched. The technique I was originally exploring actually fell through. I'm sure there are plenty of 3D gurus out there who can make it work, but that subset does not include me. The method I opted for is closer to the one in the article I originally posted, and goes something like this:

  1. I render the back faces of the cube to a texture bound to a frame buffer object via glCullFace(GL_FRONT).

  2. I render the front faces of the cube to another texture bound to the frame buffer object via glCullFace(GL_BACK).

  3. I render a full-screen quad running my shader that makes the magic happen:

    1. I take the color values from the back-face texture, subtract the colors from the front-face texture, and interpret the result as position information; this gives me a start point, an end point, and a direction for each ray.

    2. At this same coordinate in the full-screen quad, I step my rays through with the traditional method of ray marching and accumulate color information from the 3D texture representing the volume (this could be replaced with a density function as well).

  4. I then render this full-screen quad to the screen, and we get the final image!

As luck would have it, this method was extremely easy to implement, but it only came to me after DAYS of agonizing failure with the other methods.
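The per-ray accumulation in step 3.2 can be sketched on the CPU. This is a simplified stand-in, not my actual shader: the density function here is an invented test volume (a ball in the unit cube), the per-step opacity constant is made up, and the compositing is plain front-to-back "over" blending.

```cpp
#include <array>
#include <cassert>

struct Rgba { float r, g, b, a; };

// Stand-in for the 3D texture lookup: a dense ball centered in the unit cube.
// (The post notes the texture could be replaced with a density function.)
float sampleDensity(float x, float y, float z) {
    float dx = x - 0.5f, dy = y - 0.5f, dz = z - 0.5f;
    return (dx * dx + dy * dy + dz * dz < 0.25f * 0.25f) ? 1.0f : 0.0f;
}

// March from the front-face position to the back-face position, accumulating
// color and opacity front-to-back, stopping early once nearly opaque.
Rgba marchRay(std::array<float, 3> front, std::array<float, 3> back,
              int steps = 128) {
    Rgba acc{0.0f, 0.0f, 0.0f, 0.0f};
    for (int i = 0; i < steps; ++i) {
        float t = (i + 0.5f) / steps;  // parametric position along the segment
        float x = front[0] + t * (back[0] - front[0]);
        float y = front[1] + t * (back[1] - front[1]);
        float z = front[2] + t * (back[2] - front[2]);
        float d = sampleDensity(x, y, z);
        float a = d * (4.0f / steps);   // crude per-step opacity (invented constant)
        float w = (1.0f - acc.a) * a;   // front-to-back "over" weight
        acc.r += w;                     // volume colored white in this sketch
        acc.g += w;
        acc.b += w;
        acc.a += w;
        if (acc.a > 0.99f) break;       // early ray termination
    }
    return acc;
}
```

A ray through the middle of the cube accumulates opacity from the ball, while a ray along an edge misses it entirely and stays fully transparent.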

Next up: Transfer functions, better camera control (I still can't zoom) and more MRI datasets!


Sub-Update 4


This is more of a sub-sub-update. I'm working on an all-purpose dataset loader, and I've got my hands on a few more volumes that I can render at this point in time. I've also been reading up on transfer functions and have been toying with those as well. Here's a rendering of a yellow foot with blue bones:
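A transfer function is just a mapping from the raw scalar in the volume to color and opacity. Here's a minimal sketch of the idea; the thresholds and colors are made up purely for illustration (loosely matching a yellow-tissue/blue-bone look), not the values I'm actually using:

```cpp
#include <cassert>

struct Rgba { float r, g, b, a; };

// Map a raw 8-bit volume intensity to RGBA. All breakpoints are hypothetical:
// faint samples are discarded, mid-range samples become translucent yellow
// "tissue", and high samples become nearly opaque blue "bone".
Rgba transfer(unsigned char intensity) {
    if (intensity < 40)  return {0.0f, 0.0f, 0.0f, 0.0f};   // air: invisible
    if (intensity < 150) return {1.0f, 1.0f, 0.2f, 0.05f};  // tissue: soft yellow
    return {0.2f, 0.3f, 1.0f, 0.8f};                        // bone: blue
}
```

In the ray marcher, each density sample gets pushed through this lookup before being blended, which is what lets the same dataset render in completely different palettes.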



Also, a red skull:



Revised my blending equations; skull again:




Sub-Update 5


Didn't really follow the timeline I put in the midterm update, but I got a UI set up and I've managed to load some more advanced datasets (after a frustrating weekend).



