Monday, April 25, 2011

CG2: Assignment 6

Refraction:





Total Internal Reflection kind of looks bad in this shot since my background is black, but it's there, and since this has been my basic scene all along I decided not to change it.

Here's a spruced-up scene so you can see the TIR more clearly:





I'll update when I finish the extra.



UPDATE:

Extra...



I missed class and don't know whether this was discussed, but essentially what I did was start each shadow ray with a light transparency factor of 100%. Each time an object was hit on the ray's way from the shaded point to the light source, the factor was multiplied by the transparency of the intervening object. After accumulating all of these, I multiply the final shading for that point by the factor to get the effect of being in some form of shadow.
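Here's roughly what that accumulation looks like in code. This is a minimal sketch, assuming the tracer has already collected every object the shadow ray passes through; the Hit struct and calling convention are illustrative, not my actual interfaces:

    #include <vector>

    struct Hit { float transparency; };  // 0 = fully opaque, 1 = fully transparent

    // Multiply together the transparencies of everything between the shaded
    // point and the light. Any opaque object drives the factor to zero.
    float shadowTransmission(const std::vector<Hit>& objectsBetween)
    {
        float factor = 1.0f;  // start at 100%
        for (const Hit& h : objectsBetween)
            factor *= h.transparency;
        return factor;  // the point's final shading gets scaled by this
    }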

Wednesday, April 13, 2011

CG2: Assignment 5

This one was easy. I'd already set up my ray tracer to deal with recursion but hadn't had a use for it, so it did nothing. I'd also already written a function CalculateReflection(Vec3D toSource, Vec3D surfaceNormal) that does exactly what's needed to reflect a ray off of a surface with a given normal. With that function in hand and the framework ready for recursive tracing, this literally amounted to little more than the addition of an if statement.
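For reference, here's the standard formula a function like that computes. This is a sketch with a minimal stand-in Vec3 type, not my actual Math library:

    struct Vec3 { float x, y, z; };
    Vec3 operator*(const Vec3& v, float s) { return {v.x * s, v.y * s, v.z * s}; }
    Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // R = 2(N.L)N - L, where L points from the surface back toward the
    // incoming ray's origin and N is the unit surface normal.
    Vec3 calculateReflection(const Vec3& toSource, const Vec3& surfaceNormal)
    {
        return surfaceNormal * (2.0f * dot(surfaceNormal, toSource)) - toSource;
    }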



Note that this was done with 4x4 supersampling (16 samples per pixel).



The only really interesting snag I hit was briefly failing to realize that using the previous ray's point of intersection as the origin for the reflection ray wouldn't work. Thanks to floating-point error, the reflection ray will sometimes decide that the closest intersecting object is the very object it's reflecting off of. Offsetting the reflection ray's origin by an extremely small EPSILON value of 0.00001 in the direction of the outgoing ray solved this problem.
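The fix itself is tiny. A minimal sketch (same stand-in Vec3 as above):

    struct Vec3 { float x, y, z; };
    Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator*(const Vec3& v, float s) { return {v.x * s, v.y * s, v.z * s}; }

    const float EPSILON = 0.00001f;

    // Nudge the new ray's origin off the surface along its own direction so
    // the closest-hit test doesn't immediately re-intersect the same object.
    Vec3 offsetOrigin(const Vec3& hitPoint, const Vec3& reflectDir)
    {
        return hitPoint + reflectDir * EPSILON;
    }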


Extra #1


I got sick of the boring backdrop, so now that reflections give it a use, I've added to the scene a bit:

This one has supersampling:





Here's one without; you can see the randomness of the rays picked per cone much more clearly, but the picture isn't as nice.



These were taken with a +/- 5 degree spread for each cone.
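For completeness, here's one standard way to pick a random direction within such a cone, uniform over its solid angle. This is a sketch, not necessarily how my sampler does it:

    #include <cmath>
    #include <cstdlib>

    struct Vec3 { float x, y, z; };
    Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator*(const Vec3& v, float s) { return {v.x * s, v.y * s, v.z * s}; }
    Vec3 cross(const Vec3& a, const Vec3& b)
    {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }
    Vec3 normalize(const Vec3& v)
    {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return {v.x / len, v.y / len, v.z / len};
    }

    // Random direction inside a cone of half-angle spreadDeg (e.g. 5 degrees)
    // around 'axis', which is assumed to be unit length.
    Vec3 sampleCone(const Vec3& axis, float spreadDeg)
    {
        const float PI = 3.14159265f;
        float cosMax = std::cos(spreadDeg * PI / 180.0f);
        float u = std::rand() / (float)RAND_MAX;
        float v = std::rand() / (float)RAND_MAX;
        float cosTheta = 1.0f - u * (1.0f - cosMax);
        float sinTheta = std::sqrt(1.0f - cosTheta * cosTheta);
        float phi = 2.0f * PI * v;
        // build an orthonormal basis around the cone's axis
        Vec3 helper = std::fabs(axis.x) > 0.9f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
        Vec3 t1 = normalize(cross(helper, axis));
        Vec3 t2 = cross(axis, t1);
        return axis * cosTheta + t1 * (sinTheta * std::cos(phi))
                               + t2 * (sinTheta * std::sin(phi));
    }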

Sunday, April 10, 2011

CG2: Midterm Project Update

Alright, some preliminaries...

 

Name: Michael Mayo

Professor: Warren R. Carithers

Course: 4003-571

Project Updates URL: http://mseanmayo.uni.cc/news/?cat=6

 

Now then. My original project proposal didn't quite adhere to the template provided, for whatever reason (I think I just completely forgot it existed), so I'll have to improvise on the "schedule" I've been following so far.

 

Here's a quick run-down of the original objectives I had for this project, along with what I have completed or revised:

  • Volumetric Display of MRI data (Revised, see below)

  • Volumetric Density Mapping (Removed)

  • Three-Dimensional Object Segmentation (Unnecessary)


 

So yeah, it would APPEAR as if I had failed in my original plans for this project. However, I can assure you that anything BUT has occurred. A timeline was also absent from my original proposal, so I'll have to get right into the meat of what I've accomplished so far and then go from there.

 

What's been done


As mentioned above, MRI data visualization has been revised, while the other two core items have been either dropped or deemed unnecessary. This boils down to the realization I had, after achieving initial results, that there was more than enough work to be done with MRI visualization in general. I decided that rather than focus on developing interesting density equations to display interesting effects, or trying to segment the volumes themselves, I'd focus all of my energy on developing as much visualization technology as possible for MRI data. Before I go on, here are the results of such visualizations thus far:



 

As you can see, there are quite a number of ways to render the data. My initial attempts focused on a rudimentary technique based on the original "slices" of the images that make up the 3D texture. I wanted to draw each slice on a 2D quad in 3D space, with a bit of alpha blending between them, to simulate the effect of peering into a 3D volume. I ran into problems with this approach, both because of aliasing effects when looking from the side (2D flat images are quite boring from the profile...) and because the high polygon counts made it less than viable if I wanted to do any special post-processing or work with the data in any way.
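For reference, the slice technique amounted to something like the following. This is a sketch in old fixed-function OpenGL; sliceTex[] and numSlices are hypothetical stand-ins for whatever the loader produces:

    #include <GL/gl.h>

    // One textured quad per MRI slice, stacked along Z and drawn
    // back-to-front with alpha blending to fake a volume.
    void drawSlices(const GLuint* sliceTex, int numSlices)
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_TEXTURE_2D);
        for (int i = 0; i < numSlices; ++i) {
            float z = -0.5f + i / (float)(numSlices - 1);
            glBindTexture(GL_TEXTURE_2D, sliceTex[i]);
            glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex3f(-0.5f, -0.5f, z);
            glTexCoord2f(1, 0); glVertex3f( 0.5f, -0.5f, z);
            glTexCoord2f(1, 1); glVertex3f( 0.5f,  0.5f, z);
            glTexCoord2f(0, 1); glVertex3f(-0.5f,  0.5f, z);
            glEnd();
        }
    }

Here's an example of the aliasing effect I was encountering: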



 

I started reading about a more advanced method for volume visualization known as ray marching, in which rays are fired into the volume and accumulate density values until an opaque threshold is met and the ray terminates. The method was daunting at first due to its complexity. I worried that I didn't understand the theory well enough to implement it, and that once I did, I wouldn't be able to get it to render fast enough to be interactive (I had this quarter's experience with the ray tracing assignments to reflect on in that respect). However, after much frustration I DID manage to get it working with shaders, which gave me all the performance I would need. Pic related:


 

There's tons more info about how this works and what I've done, but I'm not sure how much I'm supposed to include for this update. I've documented most of it in other posts, however, so here they are: link 1, link 2.

 

My general assessment of my work on this project so far is very positive and optimistic. Initially, as the links above detail, I felt I was far too ambitious with this project. It took me quite a bit of time to get ray marching working correctly, and I hit a few other crucial roadblocks along the way. Overall, though, I think I'm actually getting more done than I had originally envisioned, and faster than I had anticipated. I'm cleaning everything up at the moment and adding support for a few different file formats, and then I'll start adding features again and getting this ready for the presentation.

 

What's next


This sums up what I would like to get done for the final presentation, prioritized highest importance first:

  1. Implement Transfer Functions to allow the MRI data to be isolated as the user sees fit.

  2. Provide a User Interface of some sort.

  3. Add support for more file formats (public MRI data comes in several).


 

Monday, April 11th marks the beginning of Week 6. I anticipate that transfer functions will be implemented by the start of Week 7, with a user interface of some sort and extra file format support added in the final Week 8 / Week 9 stretch before the presentation.

Saturday, April 9, 2011

CG2 Assignment 4

So we had to do procedural shading this time. A checkerboard texture was required, so here it is:





The interesting bit about this is that I had a really hard time finding out how to calculate UV coordinates for a quad. If you use triangles, you can apparently do some fancy stuff with barycentric coordinates to figure out UVs, but I couldn't really work that out, and my triangle intersection code generates parametric coordinates instead (I guess this is more efficient for intersection tests). So what I ended up doing instead was working out the math on my own using vector projections, as described on this wikipedia page. Here's my method:

After I've got the intersection registered for the quad, I do the following calculations:

    Math::Point3D PoS = ray.origin + ray.dir * intersection.distanceFromEye;  // point of intersection
    Math::Vec3D uvOffsets(PoS - myOrigin);                                    // offset from the quad's corner
    intersection.u = Math::Dot(myEdge1, uvOffsets) / myEdge1.lengthSquared(); // project onto edge 1
    intersection.v = Math::Dot(myEdge2, uvOffsets) / myEdge2.lengthSquared(); // project onto edge 2

And that's it: intersection.u/v holds the UV coordinates. The cool thing is the divisor. Projecting the offset d onto an edge E gives a length of Dot(E, d) / |E|, and normalizing that length into u/v space means dividing by |E| once more. Combining the two divides means dividing by |E|^2 instead, i.e. by lengthSquared(), avoiding a costly square root calculation on both parts.
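With u/v in hand, the checkerboard itself is the easy part. A minimal sketch (the number of squares per side is an illustrative parameter, not necessarily what I used):

    // Returns true for "white" squares of an N-by-N checkerboard over [0,1] UV space.
    bool isWhiteSquare(float u, float v, int N)
    {
        int iu = (int)(u * N);
        int iv = (int)(v * N);
        return (iu + iv) % 2 == 0;
    }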

I'll update this space as I finish the extras. I'm shooting for a Mandelbrot set as extra #1.


Update #1: Mandelbrot


Here it is:



Kinda hard to see, but it's there. I'm just drawing it on the floor quad, which is why it's at a weird angle.
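The gist of it is mapping the UV coordinates into the complex plane and iterating z = z^2 + c. A minimal sketch; the plane bounds and iteration count here are illustrative, not necessarily what I used:

    #include <complex>

    // Map (u, v) in [0,1]^2 to the complex plane and test for set membership.
    bool inMandelbrot(float u, float v, int maxIter)
    {
        std::complex<float> c(-2.5f + u * 3.5f, -1.25f + v * 2.5f);
        std::complex<float> z(0.0f, 0.0f);
        for (int i = 0; i < maxIter; ++i) {
            z = z * z + c;
            if (std::norm(z) > 4.0f)  // |z|^2 > 4 means the point escapes
                return false;
        }
        return true;
    }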

Saturday, April 2, 2011

Project Update 2

Okay, lots of research has been done and I think I know where I'm going with this. First things first, an update on the original project description:

  • Volumetric display of MRI data: This has become the focus of my project due to the subject's complexity.

  • Volumetric density mapping: This has been all but discarded from the project. The actual accomplishment of volumetric rendering is more than enough on my plate.

  • Three-Dimensional object segmentation: This too has been put very far onto the back burner, for the same reasons as density mapping.

Now then, with that in mind, here's what I've learned:

First and foremost, my initial method of rendering slices is being thrown out. I've already posted on this matter once before, so I'll leave the specifics there. Essentially, the only way I could accomplish this well would be to use a shader to blend between slices, and at that point I might as well just try a different method.

Therefore, the logical path is to implement volume rendering via RAY MARCHING! HOORAH!!!! [/sarcasm]

But seriously, this is a pretty cool concept. I'm not going to go into the specifics here for now because I'm more than certain that I'll be writing a lot about it for my mid-quarter update soon. What's important now is that it's INCREDIBLY cool and INCREDIBLY complex, as most things in CG seem to be...



So that's the justification for the changes to the original project. I'm probably going to spend the majority of my time implementing ray marching successfully. However, once I DO do that, I'll be able to plug a lot of different things into the pipeline, like transfer functions, which will allow me to color my MRI scans to accurately match the human body. I'm very excited about this.



I'll be updating here with my progress implementing ray marching.


Sub-Update 1


Okay, so the research I've done led me toward the method listed in GPU Gems Volume 3 (Link). The first thing I have to do in this approach is render the "back faces" of my cube. Thankfully, since I'm doing this in shaders, all of OpenGL's state information still applies, and I can render the cube with glCullFace(GL_FRONT) in order to drop the front faces!


 

This is actually rendered with the first part of what will become my ray-marching fragment shader as well.
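In code, the back-face pass is about as simple as it sounds. A sketch (drawUnitCube() is a hypothetical helper, standing in for however the bounding cube gets drawn):

    #include <GL/gl.h>

    void drawUnitCube();  // hypothetical helper: draws the volume's bounding cube

    void renderBackFaces()
    {
        glEnable(GL_CULL_FACE);
        glCullFace(GL_FRONT);  // cull the front faces so only the back faces draw
        drawUnitCube();
    }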

So yeah, that's done. Another update soon.


Sub-Update 2


Next up was making sure that I could "ray march" the front faces of the cube as well. I put that in quotes because no real marching is happening yet; what IS happening is that I'm managing to look up the texture correctly and pass it along to the ray marching algorithm:



That meant I had the starting and ending positions of each ray encoded within the two texture objects (the colors are actually XYZ positions), so I could calculate the ray directions, rendered below:





More to come soon.


Sub-Update 3


Success! Here's proof before I talk about the specifics...





So this is fully ray marched. The technique I was originally exploring actually fell through. I'm sure there are plenty of 3D gurus out there who can make it work, but that subset does not include me. The method I opted for sticks closer to the article I originally posted, and goes something like this:

  1. I render the back faces of the cube to a texture bound to a frame buffer object via glCullFace(GL_FRONT).

  2. I render the front faces of the cube to another texture bound to the frame buffer object via glCullFace(GL_BACK).

  3. I render a full-screen quad running my shader that makes the magic happen:

    1. I take the color values from the backface texture and subtract the colors from the frontface texture, interpreting the result as position information; this gives me a start point, end point, and direction for each ray.

    2. At this same coordinate in the full-screen quad, I step my rays through the volume with the traditional ray marching method and accumulate color information from the 3D texture representing the volume (this could be replaced with a density function as well); see the sketch after this list.

  4. I then render this full-screen quad to the screen, and we get the final image!
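Here's a rough CPU-side sketch of what the shader's marching loop (step 3.2) boils down to. The real thing is GLSL; this version uses stand-in types, and the step count and compositing cutoff are illustrative:

    struct Vec3 { float x, y, z; };
    struct RGBA { float r, g, b, a; };
    Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator*(const Vec3& v, float s) { return {v.x * s, v.y * s, v.z * s}; }

    RGBA sampleVolume(const Vec3& p);  // hypothetical: the 3D texture lookup

    // March from the front-face position to the back-face position,
    // compositing samples front-to-back until the ray is opaque enough.
    RGBA rayMarch(const Vec3& start, const Vec3& end, int maxSteps)
    {
        Vec3 step = (end - start) * (1.0f / maxSteps);
        Vec3 p = start;
        RGBA accum = {0, 0, 0, 0};
        for (int i = 0; i < maxSteps && accum.a < 0.99f; ++i) {
            RGBA s = sampleVolume(p);
            float w = (1.0f - accum.a) * s.a;  // front-to-back compositing weight
            accum.r += w * s.r;
            accum.g += w * s.g;
            accum.b += w * s.b;
            accum.a += w;
            p = p + step;
        }
        return accum;
    }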

As luck would have it, this method was extremely easy to implement, but only came to me after DAYS of agonizing failure with the other methods.

Next up: Transfer functions, better camera control (I still can't zoom) and more MRI datasets!


Sub-Update 4


This is more of a sub-sub-update. I'm working on an all-purpose dataset loader, and I've got my hands on a few more volumes that I can render now. I've also been reading up on transfer functions and have been toying with those as well (a sketch of the idea appears after the images below). Here's a rendering of a yellow foot with blue bones:



Also, a red skull:



I revised my blending equations; here's the skull again:



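As promised, here's what a transfer function boils down to: a mapping from raw density to color and opacity. The thresholds and colors below are illustrative stand-ins for hand-tuned values, but this is the shape of the thing behind the yellow-flesh/blue-bone look:

    struct RGBA { float r, g, b, a; };

    // Map a normalized density sample (0..1) to a color and opacity.
    RGBA transfer(float density)
    {
        if (density < 0.1f) return {0.0f, 0.0f, 0.0f, 0.0f};   // air: fully transparent
        if (density < 0.4f) return {1.0f, 1.0f, 0.0f, 0.05f};  // soft tissue: faint yellow
        return {0.0f, 0.0f, 1.0f, 0.4f};                       // bone: denser blue
    }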

Sub-Update 5


I didn't really follow the timeline I put in the midterm update, but I got a UI set up and managed to load some more advanced datasets (after a frustrating weekend).