Tuesday, March 29, 2011

Raytracer: Assignment 3

So we had to implement Phong illumination for this assignment. Here are the results:

From CG2 - Week 3



I've got the basics done. I don't think it's very efficient, so I'll probably tweak some values and see what happens. There were also a couple of extras to be done, namely implementing Blinn-Phong illumination and multiple light sources, which I haven't done yet. That's next.
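
For reference, here's roughly the per-light computation involved. This is a minimal sketch rather than my actual renderer code, with a stand-in Vec3 type and helpers:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  normalize(const Vec3& v)          { return v * (1.0f / std::sqrt(dot(v, v))); }

// Classic Phong: ambient + diffuse + specular. All vectors are unit length;
// N = surface normal, L = direction to the light, V = direction to the viewer.
Vec3 phong(const Vec3& N, const Vec3& L, const Vec3& V,
           const Vec3& ambient, const Vec3& diffuse, const Vec3& specular,
           float shininess)
{
    float nDotL = std::max(dot(N, L), 0.0f);
    Vec3  R     = N * (2.0f * dot(N, L)) - L;      // L reflected about N
    float rDotV = std::max(dot(R, V), 0.0f);
    return ambient + diffuse * nDotL + specular * std::pow(rDotV, shininess);
}
```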


Update #1: Multiple light sources


Well, this actually didn't require any additional functionality; I just had to add another light to the scene. Gotta love modularity.

From CG2 - Week 3

The instructions were to make it obvious that there was more than one light in the scene, so I put two of them behind the scene. The red circles were added afterward and surround the two point light sources contributing to the scene.
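
To make the modularity concrete: shading with several lights is just a loop over the per-light term. A sketch building on the helpers above (Light and Material are stand-in types, and shadowing is omitted):

```cpp
#include <vector>

struct Light    { Vec3 position; };
struct Material { Vec3 ambient, diffuse, specular; float shininess; };

// Sum the Phong contribution of every light in the scene.
Vec3 shade(const std::vector<Light>& lights, const Vec3& point,
           const Vec3& N, const Vec3& V, const Material& mat)
{
    Vec3 color = mat.ambient;                        // ambient applied once
    for (const Light& light : lights) {
        Vec3 L = normalize(light.position - point);  // direction to this light
        color = color + phong(N, L, V, Vec3{0, 0, 0},
                              mat.diffuse, mat.specular, mat.shininess);
    }
    return color;
}
```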


Update #2: Blinn-Phong illumination


Well, just what it says. Here it is.

From CG2 - Week 3
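
The only change from plain Phong is the specular term: instead of reflecting L about the normal and comparing against the view vector, you compare the normal against the half-vector between L and V. A sketch using the same stand-in helpers:

```cpp
// Blinn-Phong: identical to Phong except the specular term uses the
// half-vector H = normalize(L + V) instead of the reflection vector.
Vec3 blinnPhong(const Vec3& N, const Vec3& L, const Vec3& V,
                const Vec3& ambient, const Vec3& diffuse, const Vec3& specular,
                float shininess)
{
    float nDotL = std::max(dot(N, L), 0.0f);
    Vec3  H     = normalize(L + V);                // halfway between L and V
    float nDotH = std::max(dot(N, H), 0.0f);
    return ambient + diffuse * nDotL + specular * std::pow(nDotH, shininess);
}
```

Since N.H falls off more gently than R.V, the shininess exponent generally needs to be a few times larger to produce a highlight the same size as Phong's.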



I also implemented Cook-Torrance illumination just for the hell of it, though I'm not sure I got it 100% right:

From CG2 - Week 3

Note: I'm hard-coding the roughness at 1.0 and the normal-incidence reflectance at 0.0, since my ray tracer currently doesn't support refraction and I don't feel like adding look-ups for roughness values.
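
For the curious, this is roughly the formulation I followed: the original paper's D*G*F form with a Beckmann distribution and Schlick's approximation to the Fresnel term, with pi rather than 4 in the denominator as in the original paper. Treat it as a sketch of the idea, not a verified implementation; as I said, I'm not positive mine is 100% right.

```cpp
// Cook-Torrance specular term: D * G * F / (pi * (N.L) * (N.V)).
// m  = RMS roughness (hard-coded to 1.0 above); F0 = normal-incidence
// reflectance (hard-coded to 0.0). Dot products are clamped away from
// zero to dodge divide-by-zero at grazing angles.
float cookTorrance(const Vec3& N, const Vec3& L, const Vec3& V,
                   float m = 1.0f, float F0 = 0.0f)
{
    const float pi = 3.14159265f;
    Vec3  H     = normalize(L + V);
    float nDotH = std::max(dot(N, H), 1e-4f);
    float nDotV = std::max(dot(N, V), 1e-4f);
    float nDotL = std::max(dot(N, L), 1e-4f);
    float vDotH = std::max(dot(V, H), 1e-4f);

    // D: Beckmann microfacet distribution
    float c2 = nDotH * nDotH;
    float D  = std::exp((c2 - 1.0f) / (m * m * c2)) / (pi * m * m * c2 * c2);

    // G: geometric attenuation (microfacet masking/shadowing)
    float G = std::min(1.0f, std::min(2.0f * nDotH * nDotV / vDotH,
                                      2.0f * nDotH * nDotL / vDotH));

    // F: Schlick's approximation to the Fresnel term
    float F = F0 + (1.0f - F0) * std::pow(1.0f - vDotH, 5.0f);

    return (D * G * F) / (pi * nDotL * nDotV);
}
```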

Monday, March 28, 2011

Project Update 1

This is more of an update for me, to gather my thoughts so far...

So I've started this project, and I'm realizing that there's a bit more to volumetric rendering than I had first thought. The general idea is really simple, but that's also what makes it so risky. Most of the reading I've done on the subject (it's actually tougher to find info on than one would think) points to the quick-and-dirty method of rendering via slices:

[caption id="" align="alignnone" width="500" caption="Source: http://charhut.info/files/cs280/volume1.png"][/caption]

So in the above picture you'd be rendering a crude sphere. Obviously aliasing is a massive concern here. But nevertheless I decided I'd try this approach first. My initial attempt was promising:


However, there's that bit of aliasing in the right photo that comes from looking at the slices from the "side", sort of like the demo pic above. It looks horrid, and it will seriously interfere when I start using complex volumetric textures instead of a big green block. I thought adding alpha blending would remedy this, but while it made the block look cooler, it actually made the aliasing more apparent... just smoother:

From Voltex



So this method wasn't going to work. Or at least not the way I wanted it to. I also experimented with rendering three dimensions' worth of slices, oriented to each axis in turn. This DID result in a near-perfect rendering of the volumetric cube from all angles, but it also tanked my frame rate for obvious reasons. I threw the code into a display list and compiled it just to see, but it actually didn't help much. Even if the speed had been bearable, I wasn't using alpha blending for the three-dimensional slice renderings, and I have a feeling the aliasing would have come back once I did...
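
For reference, the slice approach itself boils down to very little code. Here's roughly what one axis' worth of slices looks like in immediate-mode OpenGL; a sketch that assumes the 3D texture is bound, blending is enabled, and the volume fills the unit cube:

```cpp
#include <GL/gl.h>

// Draw one axis' worth of slices through a bound GL_TEXTURE_3D. The volume
// fills the unit cube, so texture and vertex coordinates match. For correct
// blending the loop must run back-to-front relative to the camera; the
// three-axis version repeats this for the X and Y axes as well.
void drawSlicesZ(int numSlices)
{
    glBegin(GL_QUADS);
    for (int i = 0; i < numSlices; ++i) {
        float z = (i + 0.5f) / numSlices;          // slice depth in [0,1]
        glTexCoord3f(0, 0, z); glVertex3f(0, 0, z);
        glTexCoord3f(1, 0, z); glVertex3f(1, 0, z);
        glTexCoord3f(1, 1, z); glVertex3f(1, 1, z);
        glTexCoord3f(0, 1, z); glVertex3f(0, 1, z);
    }
    glEnd();
}
```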



So now I've got to try another route. I stumbled across some old PS 3.0 demos on NVIDIA's site yesterday, and one of them happened to be on volumetric texture rendering. I had a peek at the demo code and saw that they take a different approach: ray-marching a solid cube in the pixel shader. It also runs extremely fast compared to my old-fashioned OpenGL code, and the aliasing is completely absent in all cases. All around it just seems like the right direction to go, although I'm very inexperienced with shaders. I'm going to try this over the next week or so and see if it gets me closer to what I'm aiming for.
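
To make the idea concrete, here's the gist of what such a shader does per pixel, written as plain C++ rather than shader code. sampleVolume and Vec4 are stand-ins, and the Vec3 helpers are the ones from the sketches above:

```cpp
struct Vec4 { float r, g, b, a; };

Vec4 sampleVolume(const Vec3& pos);   // stand-in for the 3D texture fetch

// March a ray through the volume, compositing front to back with the
// "over" operator, and bail out early once the ray is nearly opaque.
Vec4 rayMarch(Vec3 pos, const Vec3& dir, float stepSize, int maxSteps)
{
    Vec4 result = {0, 0, 0, 0};
    for (int i = 0; i < maxSteps && result.a < 0.99f; ++i) {
        Vec4 s = sampleVolume(pos);
        float weight = (1.0f - result.a) * s.a;    // what this sample adds
        result.r += weight * s.r;
        result.g += weight * s.g;
        result.b += weight * s.b;
        result.a += weight;
        pos = pos + dir * stepSize;                // step along the ray
    }
    return result;
}
```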



On the subject of the MRI data: the one dataset I have in my possession is just a folder of JPEG images. I've managed to load these into my 3D texture volume using a great little C library called SOIL. Unfortunately, JPEG has no alpha channel, so there's no alpha blending at all, and the renderings are pretty boring and not worth showing here. I haven't decided what I want to do with them in that case. One thought is to preprocess them with Photoshop's batch image action and see if I can apply alpha to the images based on pixel contrast; this might work since they're grayscale, but I'm very inexperienced with Photoshop as well. Another idea is to implement color keying to blend out all of the black, though this would be a crude way to do it and would miss the finer details of the dataset. The option I was looking at prior to stumbling upon the ray-marching demo was to implement the conversion in a pixel shader, essentially doing on the fly what Photoshop would do for me. This still might be possible, especially since I'd be dealing with shaders anyway at that point, but I'd rather not over-complicate the shader code if I don't have to.
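
For the preprocessing idea, the conversion is simple enough to do right at load time instead. Here's a sketch of deriving alpha from luminance while expanding the grayscale JPEGs to RGBA; the SOIL calls are the library's real ones, but the function itself is hypothetical, not my actual loader:

```cpp
#include <vector>
#include "SOIL.h"

// Load one grayscale JPEG slice and expand it to RGBA, with alpha taken
// from luminance so black voxels fade out entirely.
std::vector<unsigned char> loadSliceRGBA(const char* path, int& w, int& h)
{
    int channels;
    unsigned char* gray = SOIL_load_image(path, &w, &h, &channels, SOIL_LOAD_L);
    if (!gray) return std::vector<unsigned char>();

    std::vector<unsigned char> rgba(w * h * 4);
    for (int i = 0; i < w * h; ++i) {
        unsigned char lum = gray[i];
        rgba[i * 4 + 0] = lum;        // replicate luminance into R, G, B
        rgba[i * 4 + 1] = lum;
        rgba[i * 4 + 2] = lum;
        rgba[i * 4 + 3] = lum;        // alpha from brightness
    }
    SOIL_free_image_data(gray);
    return rgba;
}
```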



As for other MRI datasets, I found a website absolutely FULL of them: Link

The only issue is that their license requires the data to be packaged in a special format that guarantees the property rights travel with it whenever it's distributed. However, they also distribute an open-source program that can read and render these files, meaning I can learn to open them myself. This is next, after I get my volumes rendering correctly and the initial MRI data drawing how I want it. I've had a look at the code for opening these special files, and it seems pretty easy to implement, so it shouldn't be too much work. Plus then I'll have MUCH clearer MRIs to work with!



On a final note: I initially intended to do procedural generation via volumetric density algorithms as described in Ken Perlin's paper. This is currently on the back burner. I'm hoping that everything else goes smoothly and I get to it, but I'm starting to see that rendering the volumes efficiently can itself be a complicated process, and I'd rather get that working with some quality MRI data before I experiment with generating something of my own.

Tuesday, March 22, 2011

Raytracer: Assignment 2

So I got the ray tracing working the other day, at least for non-recursive color picking. I've been waiting to put this up until I cleaned it up a bit. I've learned a lot on this assignment aside from how to implement the ray tracer itself:

  1. 90% of the time, if the algorithm is showing anything at all, you're 99% correct and one little bug is holding you back.

  2. Never assume you know math by heart. I've taken and easily aced all levels of calculus, differential equations, matrix algebra, and multi-variable calculus; yet almost a solid day's worth of frustration this weekend came down to misplaced parentheses in the quadratic formula (see the sketch after this list).

  3. These things are slow. When I first got everything rendering this weekend, I was looking at an average of 7 seconds per frame. That skyrocketed to over half a minute with a simple 2x2 multi-sampling technique, and the cost only climbs as scene complexity rises. I've profiled my C++ code using a fantastic utility called Very Sleepy, which picks up on the debugging symbols that Visual Studio embeds in my program and can profile individual lines of code against up-to-date source. This helped me narrow down the problematic parts of my renderer and get that 7 seconds per frame down to a more tolerable 2 seconds. More on this later.
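
And here's that sketch: the ray-sphere intersection where the parentheses bit me, reusing the Vec3 helpers from the sketches above. Illustrative, not my actual mRay code:

```cpp
#include <cmath>

// Ray-sphere intersection. With P(t) = origin + t * dir and a sphere of
// radius r centered at C, substituting P(t) into |P - C|^2 = r^2 gives a
// quadratic a*t^2 + b*t + c = 0 where:
//   a = dir.dir,  b = 2 * dir.(origin - C),  c = (origin - C).(origin - C) - r^2
bool intersectSphere(const Vec3& origin, const Vec3& dir,
                     const Vec3& center, float radius, float& t)
{
    Vec3  oc   = origin - center;
    float a    = dot(dir, dir);
    float b    = 2.0f * dot(dir, oc);
    float c    = dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return false;                 // ray misses entirely
    t = (-b - std::sqrt(disc)) / (2.0f * a);       // nearer of the two roots
    return t > 0.0f;                               // only hits in front count
}
```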



So anyways yeah, it works and it renders the scene as it's supposed to:

From CG2 - Week 2



Once I could stand the wait time for multi-sampled renderings to process, I snapped one of those too at 4x4 = 16 samples per pixel:

From CG2 - Week 2

The results of the 16x multi-sampling look good, but it renders really slowly. I could probably improve the sampling code a bit, but I don't think it's worth my time at the moment, and it's probably still not the largest bottleneck in the system.
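
For context, the multi-sampling is just a grid of sub-pixel rays averaged together, so the cost grows with the square of the per-axis sample count. A sketch, with tracePrimaryRay standing in for my camera-plus-trace code:

```cpp
Vec3 tracePrimaryRay(float u, float v);   // stand-in: trace camera ray at (u, v)

// n x n grid supersampling: one ray through the center of each sub-pixel
// cell, with the results box-filtered (averaged) together.
Vec3 samplePixel(int px, int py, int n)
{
    Vec3 sum = {0, 0, 0};
    for (int sy = 0; sy < n; ++sy)
        for (int sx = 0; sx < n; ++sx)
            sum = sum + tracePrimaryRay(px + (sx + 0.5f) / n,
                                        py + (sy + 0.5f) / n);
    return sum * (1.0f / (n * n));        // average the n * n samples
}
```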



In fact, this is another important thing I learned during this. I strove for an object-oriented approach to the whole thing, and unfortunately it turns out that all the overhead the C fans rant about actually matters in something as CPU-intensive as a ray tracer. I actually believe this is the source of much, if not all, of my performance problems. Multiple levels of data abstraction and hierarchical class design make some operations much slower than they could be. In my efforts to get the time down to where it is now, I already reduced my Ray3D class to a simple struct with public members. I also dropped STL iterator functionality from the scene's intersection code, since Very Sleepy claimed that iterator incrementing was a huge cost. That was probably the biggest speed boost I got overall.
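
Concretely, the changes were along these lines. This is illustrative rather than the literal diff; Object, Hit, and the intersect() signature are stand-ins for my actual classes:

```cpp
#include <vector>

// Ray3D reduced from a full class with accessors to a plain aggregate:
// same data, but nothing sits between the hot loop and the members.
struct Ray3D {
    Vec3 origin;
    Vec3 direction;
};

struct Hit;      // stand-in for my intersection record
struct Object {  // stand-in for my scene-object base class
    virtual void intersect(const Ray3D& ray, Hit& hit) const = 0;
    virtual ~Object() {}
};

// The intersection loop with plain indexing in place of STL iterators.
void intersectScene(const std::vector<Object*>& objects,
                    const Ray3D& ray, Hit& hit)
{
    for (std::size_t i = 0; i < objects.size(); ++i)
        objects[i]->intersect(ray, hit);
}
```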



Anyways, I'm a little early on this assignment; I think I've got roughly a week until it's due. I plan to work on my project a bit during that time, but if that's looking good by later in the week I just might go back and re-implement parts of mRay's pipeline in straight C. Another possibility I'm entertaining is OpenCL. I'm gonna do a bit of research on that, since if I'm using straight C I might as well do something cool with it. We'll see.

Monday, March 14, 2011

CG2: Project Proposal

For my independent project, I plan to do some experimental work with Volumetric Textures (Hypertextures) based loosely on Ken Perlin's work. Specifically I would like to experiment with using hypertextures as a means for accurately displaying volumetric data in various forms. Some of the things I would like to do include:

 

Volumetric display of MRI data


I have in my possession a series of 280 MRI "slices" of a human head. These are completely anonymous and obtained from Professor Vallino in the SE dept. during a project last year. For that assignment I had to create multi-dimensional reconstructions of the MRI data (specifically from the Sagittal and Coronal planes) from the existing top-down slices. I think it would be very interesting and rewarding to use these images to construct a 3D representation of the data.

 
A screenshot from my original work with the image reconstructions:



 

Volumetric density mapping


In Ken Perlin's paper, he talks about modeling "soft" distributions of data for objects, and in class we used the analogy of the internals of a peach to discuss this. I would like to do something similar, modeling either the structure of a peach or the structure of some other internally-complex object.

 

Three-dimensional Object segmentation


On the above two points, I've begun thinking about the possibilities of "splitting" these hypertextures to simulate fracturing the objects they represent. Using OpenGL's 3D texture support, I believe this would be possible for me to do without overwhelming myself. The support for mapping texture coordinates to vertices should allow me to initially "break" the objects along planes that I construct based on mouse clicking and the camera view. With a little bit of work, I think this would let me interactively peer inside of the objects that I create.

 

Hopefully this is all adequate, and you don't think it is too much or too little work. I'm very interested in all three of these ideas, and I think they'll provide me with enough work over the quarter to keep me busy in between assignments. Plus, the end results SHOULD be very cool!

Thursday, March 10, 2011

CG2: Week 1 Assignment

Setting the scene:


From CG2 - Week 1

Note: I believe the dark artifact on the smaller sphere is a result of using a combination of OpenGL's alpha blending and built-in lighting. I could have played with the blending settings and probably gotten rid of it, or at least done all the lighting with shaders, but since it wasn't required I just let it be.

Details


Camera

Position: (0.0, 0.0, 0.0)

Lookat: (0.0, 0.0, 1.0)

Up Vector: (0.0, 1.0, 0.0)

Floor

Front-left Corner: (-1.0, -0.5, 1.0)

Back-right Corner: (1.0, -0.5, -5.0)



Back Sphere (smaller)

Position: (-0.35, -0.2, -2.0)

Radius: 0.25



Front Sphere (larger)

Position: (0.0, 0.0, -1.6)

Radius: 0.30



Light

Position: (3.0, 3.0, -10.0)

Ambient Color: (0.0, 0.0, 0.0)

Diffuse Color: (1.0, 1.0, 1.0)

Specular Color: (1.0, 1.0, 1.0)
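
For reference, here's a minimal sketch of how those parameters might map onto fixed-function OpenGL calls; not my actual setup code:

```cpp
#include <GL/glu.h>

// Fixed-function setup matching the values listed above (light 0, positional).
void setupScene()
{
    gluLookAt(0.0, 0.0, 0.0,     // camera position
              0.0, 0.0, 1.0,     // look-at point
              0.0, 1.0, 0.0);    // up vector

    GLfloat pos[]  = { 3.0f, 3.0f, -10.0f, 1.0f };  // w = 1: positional light
    GLfloat amb[]  = { 0.0f, 0.0f, 0.0f, 1.0f };
    GLfloat diff[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    GLfloat spec[] = { 1.0f, 1.0f, 1.0f, 1.0f };

    glLightfv(GL_LIGHT0, GL_POSITION, pos);
    glLightfv(GL_LIGHT0, GL_AMBIENT,  amb);
    glLightfv(GL_LIGHT0, GL_DIFFUSE,  diff);
    glLightfv(GL_LIGHT0, GL_SPECULAR, spec);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
}
```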